A link between chemical weathering and physical erosion exists at the catchment scale over a wide range of erosion rates [1,2]. However, in mountain environments, where erosion rates are highest, weathering may be kinetically limited [3-5] and therefore decoupled from erosion. In active mountain belts, erosion is driven by bedrock landsliding [6] at rates that depend strongly on the occurrence of extreme rainfall or seismicity [7]. Although landslides affect only a small proportion of the landscape, bedrock landsliding can promote the collection and slow percolation of surface runoff in highly fragmented rock debris and create favourable conditions for weathering. Here we show from analysis of surface water chemistry in the Southern Alps of New Zealand that weathering in bedrock landslides controls the variability in solute load of these mountain rivers. We find that systematic patterns in surface water chemistry are strongly associated with landslide occurrence at scales from a single hillslope to an entire mountain belt, and that landslides boost weathering rates and river solute loads over decades. We conclude that landslides couple erosion and weathering in fast-eroding uplands and, thus, mountain weathering is a stochastic process that is sensitive to climatic and tectonic controls on mass wasting processes.
This manuscript proposes a method to assess hydrological drought in semi-arid environments under high impoundment rate and applies it to the semi-arid Jaguaribe River basin in Brazil. It analyzes droughts (1) in the largest reservoir systems; (2) in the Upper Basin, considering 4744 reservoirs, 800 wells and almost 18,000 cisterns; and (3) in reservoirs of different sizes during multiyear droughts. Results show that the water demand is constrained in the basin; hydrological and meteorological droughts are often out of phase; there is a negative correlation between storage level and drought severity; and the small systems cannot cope with long-term droughts.
The Dead Sea region has faced substantial environmental challenges in recent decades, including water resource scarcity, annual water-level decreases of ~1 m, sinkhole development, freshwater pollution by ascending brines, and seismic disturbance risks. Natural processes are significantly affected by human interference as well as by climate change and tectonic developments over the long term. A deep understanding of these processes and their interactions requires innovative scientific approaches that integrate disciplinary research and education. The research project DESERVE (Helmholtz Virtual Institute Dead Sea Research Venue) addresses these challenges in an interdisciplinary approach that includes geophysics, hydrology, and meteorology. The project is implemented by a consortium of scientific institutions in neighboring countries of the Dead Sea (Israel, Jordan, Palestinian Territories) and participating German Helmholtz Centres (KIT, GFZ, UFZ). A new monitoring network of meteorological, hydrological, and seismic/geodynamic stations has been established, and extensive field research and numerical simulations have been undertaken. For the first time, innovative measurement and modeling techniques have been applied to the extreme conditions of the Dead Sea and its surroundings, and the preliminary results show the potential of these methods. Eddy covariance measurements, performed here for the first time, give insight into the governing factors of Dead Sea evaporation. High-resolution bathymetric investigations reveal a strong correlation between submarine springs and neo-tectonic patterns. Based on detailed studies of stratigraphy and borehole information, the extent of the subsurface drainage basin of the Dead Sea is now reliably estimated. Flash floods in an arid basin are, for the first time, monitored at the basin outlet and simultaneously in its tributaries, supplemented by spatio-temporal rainfall data.
Low-altitude, high-resolution photogrammetry, combined with satellite image analysis and geophysical surveys (e.g. shear-wave reflection), has enabled a more detailed characterization of sinkhole morphology and temporal development, and of the possible subsurface controls thereon. All of these efforts and scientific results are accompanied by the interdisciplinary education of young scientists, who are invited to attend joint thematic workshops and winter schools as well as to participate in field experiments. (C) 2015 The Authors. Published by Elsevier B.V.
Oceanic lithospheric S-wave velocities from the analysis of P-wave polarization at the ocean floor (2016)
Our knowledge of the absolute S-wave velocities of the oceanic lithosphere is mainly based on global surface wave tomography, local active seismics, or compliance measurements using oceanic infragravity waves. Tomography gives a rather smooth picture of the actual S-wave velocity structure, and local measurements have limitations regarding the range of elastic parameters or the geometry of the measurement. Here, we use the P-wave polarization (apparent P-wave incidence angle) of teleseismic events to investigate the S-wave velocity structure of the oceanic crust and the upper tens of kilometres of the mantle beneath single stations. We present a relation, new to our knowledge, for the apparent P-wave incidence angle at the ocean bottom as a function of the half-space S-wave velocity. We analyse the angle in different period ranges at ocean bottom stations (OBSs) to derive apparent S-wave velocity profiles. These profiles depend on the S-wave velocities as well as on the thicknesses of the layers in the subsurface; consequently, their interpretation results in a set of equally valid models. We analyse the apparent P-wave incidence angles of an OBS data set collected in the eastern Mid-Atlantic. After a manual data quality control, we are able to determine reasonable S-wave velocity-depth models by a three-step quantitative modelling, although layer resonance sometimes influences the estimated apparent S-wave velocities. The apparent S-wave velocity profiles are well explained by an oceanic PREM model in which the upper part is replaced by four layers: a water column, a sediment layer, a crust, and a layer representing the uppermost mantle. The obtained sediment has a thickness between 0.3 and 0.9 km with S-wave velocities between 0.7 and 1.4 km s⁻¹. The estimated total crustal thickness varies between 4 and 10 km with S-wave velocities between 3.5 and 4.3 km s⁻¹.
We find a slight increase of the total crustal thickness, from ~5 to ~8 km, towards the south in the direction of a major plate boundary, the Gloria Fault. The observed crustal thickening can be related to the known dominant compression in the vicinity of the fault. Furthermore, the resulting mantle S-wave velocities decrease from values around 5.5 to 4.5 km s⁻¹ towards the fault. This decrease is probably caused by serpentinization and indicates that the oceanic transform fault affects a broad region in the uppermost mantle. In conclusion, the presented method is useful for estimating the local S-wave velocity structure beneath ocean bottom seismic stations. It is easy to implement and consists of two main steps: (1) measurement of apparent P-wave incidence angles in different period ranges for real and synthetic data, and (2) comparison of the apparent S-wave velocities determined for real and synthetic data to estimate S-wave velocity-depth models.
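For orientation, the free-surface version of such an angle-velocity relation (Svenningsen & Jacobsen, 2007) reads sin(i_app/2) = p·v_S, with p the ray parameter. The study derives a modified relation valid at the ocean bottom, so the sketch below is only an illustrative approximation, not the relation used in the paper:

```python
import math

def apparent_s_velocity(app_incidence_deg: float, ray_parameter_s_per_km: float) -> float:
    """Half-space S-wave velocity from the apparent P-wave incidence angle.

    Uses the free-surface relation sin(i_app / 2) = p * v_S (Svenningsen &
    Jacobsen, 2007).  The ocean-bottom relation of the study modifies this
    form; this sketch only illustrates the principle.
    """
    return math.sin(math.radians(app_incidence_deg) / 2.0) / ray_parameter_s_per_km

# e.g. a 40 deg apparent incidence angle at p = 0.06 s/km gives ~5.7 km/s
```

Measuring the angle in different period ranges then samples increasingly deep effective half-spaces, which is what produces the apparent S-wave velocity profiles described above.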
Rapidly uplifting coastlines are frequently associated with convergent tectonic boundaries, such as subduction zones, which are repeatedly ruptured by giant megathrust earthquakes. The coastal relief along tectonically active realms is shaped by sea-level variations and by heterogeneous patterns of permanent tectonic deformation accumulated over several megathrust earthquake cycles. However, the correlation between earthquake deformation patterns and the sustained long-term segmentation of forearcs, particularly in Chile, remains poorly understood. Furthermore, the methods used to estimate permanent deformation from geomorphic markers, such as marine terraces, have remained qualitative and rely on unrepeatable procedures. This contrasts with the increasing resolution of digital elevation models, such as Light Detection and Ranging (LiDAR) and high-resolution bathymetric surveys.
Throughout this thesis I study permanent deformation in a holistic manner: from the methods used to assess deformation rates to the processes involved in its accumulation. My research focuses on two aspects: developing methodologies to assess permanent deformation using marine terraces, and comparing permanent deformation with seismic-cycle deformation patterns at different spatial scales along the rupture zone of the M8.8 Maule earthquake (2010). Two methods are developed to determine deformation rates from wave-built and wave-cut terraces, respectively. I selected an archetypal example of a wave-built terrace at Santa Maria Island, studying its stratigraphy and recognizing sequences of reoccupation events tied to eleven radiocarbon (14C) ages. I developed a method to link patterns of reoccupation with sea-level proxies by iterating relative sea-level curves for a range of uplift rates. The best fit between relative sea level and the stratigraphic patterns is obtained for an uplift rate of 1.5 ± 0.3 m/ka.
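The uplift-rate search can be illustrated with a minimal grid-search sketch. All numbers below are hypothetical, and the thesis iterates full relative sea-level curves against reoccupation patterns rather than this toy RMS fit:

```python
import numpy as np

# Hypothetical data: ages (ka) and present-day elevations (m) of dated
# reoccupation surfaces, plus the eustatic sea level at those ages (m).
ages_ka    = np.array([5.0, 30.0, 50.0])
elev_m     = np.array([10.0, 30.0, 60.0])
eustatic_m = np.array([2.5, -15.0, -15.0])

def misfit(uplift_rate_m_per_ka: float) -> float:
    # Predicted present elevation of a shoreline formed at age t:
    # eustatic level at t plus uplift accumulated since t.
    predicted = eustatic_m + uplift_rate_m_per_ka * ages_ka
    return float(np.sqrt(np.mean((predicted - elev_m) ** 2)))

# Grid search over candidate uplift rates (m/ka)
rates = np.arange(0.0, 3.01, 0.05)
best = min(rates, key=misfit)   # -> 1.5 m/ka for this synthetic data
```

The real analysis additionally propagates the 14C age uncertainties, which is where the ± 0.3 m/ka bound comes from.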
A graphical user interface named TerraceM® was developed in Matlab®. This novel software tool determines shoreline angles of wave-cut terraces under different geomorphic scenarios. To validate the methods, I selected test sites with available high-resolution LiDAR topography along the Maule earthquake rupture zone and in California, USA. The software determines the 3D location of the shoreline angle, a proxy for the estimation of permanent deformation rates. The method fits linear interpolations to the paleo-platform and paleo-cliff on swath profiles and locates the shoreline angle at their intersection. The accuracy and precision of TerraceM® were tested by comparing its results with previous assessments, and through an experiment with students in a computer lab setting at the University of Potsdam.
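The geometric core of this procedure, intersecting linear fits to the paleo-platform and paleo-cliff segments of a swath profile, can be sketched as follows (a simplified stand-in for TerraceM®, not its actual code):

```python
import numpy as np

def shoreline_angle(x_platform, z_platform, x_cliff, z_cliff):
    """Locate the shoreline angle as the intersection of linear fits to the
    paleo-platform and paleo-cliff segments of a swath profile."""
    m1, b1 = np.polyfit(x_platform, z_platform, 1)   # gentle platform slope
    m2, b2 = np.polyfit(x_cliff, z_cliff, 1)         # steep cliff slope
    x = (b2 - b1) / (m1 - m2)                        # where the two lines meet
    return x, m1 * x + b1

# Synthetic swath profile: platform z = 0.02x + 10, cliff z = 0.8x - 68
x_p = np.linspace(0, 90, 10)
x_c = np.linspace(105, 140, 8)
sx, sz = shoreline_angle(x_p, 0.02 * x_p + 10, x_c, 0.8 * x_c - 68)
# intersection at x = 100 m, elevation z = 12 m
```

Repeating this for many swath profiles along the coast yields the along-strike elevation of the shoreline angle, from which uplift patterns are read.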
I combined the methods developed for wave-built and wave-cut terraces to assess regional patterns of permanent deformation along the 2010 Maule earthquake rupture. Wave-built terraces are dated using 12 infrared-stimulated luminescence (IRSL) ages, and shoreline angles of wave-cut terraces are estimated from 170 aligned swath profiles. The comparison of coseismic slip, interseismic coupling, and permanent deformation reveals three areas of high permanent uplift, terrace warping, and sharp fault offsets. These three areas correlate with regions of high slip and low coupling, as well as with the spatial limits of at least eight historical megathrust ruptures (M8-9.5). I propose that the zones of upwarping at Arauco and Topocalma reflect changes in the frictional properties of the megathrust, which result in discrete boundaries for the propagation of megathrust earthquakes.
To explore the application of geomorphic markers and quantitative morphology in offshore areas, I performed a local study of permanent deformation patterns inferred from hitherto unrecognized drowned shorelines in the Arauco Bay, at the southern end of the 2010 Maule earthquake rupture zone. A multidisciplinary approach, including morphometry, sedimentology, paleontology, 3D morphoscopy, and a landscape evolution model, is used to recognize, map, and assess local rates and patterns of permanent deformation in submarine environments. The deformation patterns are then reproduced with elastic models to assess the deformation rates of an active submarine splay fault, defined as the Santa Maria Fault System (SMFS). The best fit suggests a reverse structure with a slip rate of 3.7 m/ka for the last 30 ka. The record of land-level changes during the earthquake cycle at Santa Maria Island suggests that most of this deformation may be accrued through splay-fault reactivation during megathrust earthquakes, such as the 2010 Maule event. For a recurrence time of 150 to 200 years, as determined from historical and geological observations, slip of 0.3 to 0.7 m per event would be required to account for the 3.7 m/ka millennial slip rate. However, if the SMFS slips only every ~1000 years, spanning a few megathrust earthquakes, a slip of ~3.5 m per event would be required to account for the long-term rate. Such an event would be equivalent to a magnitude ~6.7 earthquake, capable of generating a local tsunami.
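The recurrence arithmetic in the last step is straightforward; the sketch below merely restates it, using the slip rate quoted in the text:

```python
SLIP_RATE_M_PER_KA = 3.7  # long-term slip rate of the Santa Maria Fault System

def slip_per_event(recurrence_yr: float) -> float:
    """Slip (m) each event must release to sustain the millennial rate."""
    return SLIP_RATE_M_PER_KA * recurrence_yr / 1000.0

# A ~1000-yr recurrence requires ~3.7 m of slip per event, consistent with
# reactivation in only a subset of megathrust earthquakes.
```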
The results of this thesis provide novel and fundamental information on the amount of permanent deformation accrued in the crust, and on the mechanisms responsible for this accumulation at millennial time scales along the rupture zone of the M8.8 Maule earthquake (2010). Furthermore, they highlight how quantitative geomorphology and repeatable methods for determining permanent deformation improve the accuracy of marine terrace assessments and of vertical deformation rate estimates in tectonically active coastal areas. This is vital information for adequate coastal-hazard assessment and for anticipating realistic earthquake and tsunami scenarios.
The repeated occurrence of exceptional floods within a few years, such as the Rhine floods in 1993 and 1995 and the Elbe and Danube floods in 2002 and 2013, suggests that floods in Central Europe may be organized in flood-rich and flood-poor periods. This hypothesis is studied by testing the significance of temporal clustering in flood occurrence (peak-over-threshold) time series for 68 catchments across Germany for the period 1932-2005. To assess the robustness of the results, different methods are used: Firstly, the index of dispersion, which quantifies the departure from a homogeneous Poisson process, is investigated. Further, the time-variation of the flood occurrence rate is derived by non-parametric kernel implementation and the significance of clustering is evaluated via parametric and non-parametric tests. Although the methods give consistent overall results, the specific results differ considerably. Hence, we recommend applying different methods when investigating flood clustering. For flood estimation and risk management, it is of relevance to understand whether clustering changes with flood severity and time scale. To this end, clustering is assessed for different thresholds and time scales. It is found that the majority of catchments show temporal clustering at the 5% significance level for low thresholds and time scales of one to a few years. However, clustering decreases substantially with increasing threshold and time scale. We hypothesize that flood clustering in Germany is mainly caused by catchment memory effects along with intra- to inter-annual climate variability, and that decadal climate variability plays a minor role. (C) 2016 Elsevier B.V. All rights reserved.
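The index of dispersion used above is the variance-to-mean ratio of event counts in fixed windows: approximately 1 for a homogeneous Poisson process, above 1 for clustered occurrences. A minimal sketch (the study additionally uses kernel occurrence-rate estimates and parametric and non-parametric significance tests not shown here):

```python
import numpy as np

def index_of_dispersion(event_years, start, end, window_yr=1):
    """Variance-to-mean ratio of peak-over-threshold event counts in fixed
    windows.  ~1 for a homogeneous Poisson process; > 1 indicates temporal
    clustering (flood-rich / flood-poor periods)."""
    bins = np.arange(start, end + window_yr, window_yr)
    counts, _ = np.histogram(event_years, bins=bins)
    return counts.var(ddof=1) / counts.mean()

# Clustered events (two bursts) give an index well above 1; perfectly
# regular events (one per window) give an index below 1.
```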
To understand past flood changes in the Rhine catchment, and in particular the role of anthropogenic climate change in extreme flows, an attribution study relying on a proper GCM (general circulation model) downscaling is needed. A downscaling based on conditioning a stochastic weather generator on weather patterns is a promising approach. This approach assumes a strong link between weather patterns and local climate, and sufficient GCM skill in reproducing the weather pattern climatology. These presuppositions are evaluated here for the first time, using 111 years of daily climate data from 490 stations in the Rhine basin and comprehensively testing the number of classification parameters and GCM weather pattern characteristics. A classification based on a combination of mean sea level pressure, temperature, and humidity from the ERA20C reanalysis of atmospheric fields over central Europe, with 40 weather types, was found to be the most appropriate for stratifying six local climate variables. The corresponding skill is quite diverse though, ranging from good for radiation to poor for precipitation; especially for the latter it was apparent that pressure fields alone cannot sufficiently stratify local variability. To test the skill of the latest generation of GCMs from the CMIP5 ensemble in reproducing the frequency, seasonality, and persistence of the derived weather patterns, output from 15 GCMs is evaluated. Most GCMs capture these characteristics well, but some models showed consistent deviations in all three evaluation criteria and should be excluded from further attribution analysis.
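Two of the three evaluation criteria, frequency and persistence of weather types, can be computed from a daily pattern sequence along these lines (a sketch, not the study's code; seasonality would additionally split the sequence by month):

```python
from collections import Counter
from itertools import groupby

def pattern_statistics(daily_patterns):
    """Relative frequency and mean persistence (mean run length, in days) of
    each weather type in a daily pattern sequence.  Comparing these numbers
    between reanalysis- and GCM-derived sequences is one way to check a
    GCM's weather-pattern climatology."""
    n = len(daily_patterns)
    freq = {k: v / n for k, v in Counter(daily_patterns).items()}
    run_lengths = {}
    for pattern, group in groupby(daily_patterns):          # consecutive runs
        run_lengths.setdefault(pattern, []).append(sum(1 for _ in group))
    persistence = {k: sum(v) / len(v) for k, v in run_lengths.items()}
    return freq, persistence

# Toy sequence of three weather types over ten days
freq, persistence = pattern_statistics(list("AAABBAACCC"))
```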
During expedition 202 aboard the RV Sonne in 2009, 39 seafloor surface sediment sites were sampled over a wide sector of the North Pacific and adjoining Bering Sea. The data served to infer land-ocean linkages of terrigenous sediment supply in terms of major sources and modes of sediment transport within an over-regional context. This is based on an integrated approach dealing with grain-size analysis, bulk mineralogy, and clay mineralogy in combination with statistical data evaluation (end-member modelling of grain-size data, fuzzy cluster analysis of mineralogical data). The findings on clay mineralogy served to update those of earlier work extracted from the literature. Today, two processes of terrigenous sediment supply prevail in the study area: far-distance aeolian sediment supply to the pelagic North Pacific, and hemipelagic sediment dispersal from nearby land sources via ocean currents along the continental margins and island arcs. Aeolian particles show the finest grain sizes (clay and fine silt), whereas hemipelagic sediments have high abundances of coarse silt. Exposed sites on seamounts and the continental slope are partly swept by strong currents, leading to residual enrichment of fine sand. Four sediment sources can be distinguished on the basis of distinct index minerals revealed by statistical data analysis: dust plumes from central Asia (quartz, illite), altered materials from the volcanic regions of Kamchatka and the Aleutian Arc (smectite), detritus from the Alaskan Cordillera (chlorite, hornblende), and fluvial detritus from far-eastern Siberia and the Alaska mainland (quartz, feldspar, illite). These findings confirm those of former studies but considerably expand the geographic range of this suite of proxies as far south as 39°N in the open North Pacific. The present integrated methodological approach proved useful in identifying the major modern processes of terrigenous sediment supply to the study region.
This aspect deserves attention in the selection of sediment core sites for future palaeoenvironmental reconstructions related to aeolian and glacial dynamics, as well as the recognition of palaeo-ocean circulation patterns in general.
In the past, floods were mainly managed by flood control mechanisms; the focus was on reducing the flood hazard, and the potential consequences were of minor interest. Nowadays, river flooding is increasingly seen from the risk perspective, including possible consequences. Moreover, the large-scale picture of flood risk has become increasingly important for disaster management planning, national risk assessments, and the (re-)insurance industry. It is therefore widely accepted that risk-oriented flood management approaches at the basin scale are needed. However, large-scale flood risk assessment methods for areas of several 10,000 km² are still in their early stages. Traditional flood risk assessments are performed reach-wise, assuming constant probabilities for the entire reach or basin. This may be helpful on a local basis, but where large-scale patterns are important this approach is of limited use: assuming a T-year flood (e.g. 100 years) for the entire river network is unrealistic and would lead to an overestimation of flood risk at the large scale. Additionally, due to the lack of damage data, the probability of peak discharge or rainfall is usually used as a proxy for damage probability when deriving flood risk. With a continuous, long-term simulation of the entire flood risk chain, the spatial variability of probabilities can be considered and flood risk can be derived directly from damage data in a consistent way.
The objective of this study is the development and application of a full flood risk chain, appropriate for the large scale and based on long-term, continuous simulation. The novel approach of 'derived flood risk based on continuous simulations' is introduced, in which synthetic discharge time series are used as input to flood impact models and flood risk is derived directly from the resulting synthetic damage time series.
The bottleneck at this scale is the hydrodynamic simulation. To find suitable hydrodynamic approaches for the large scale, a benchmark study with simplified 2D hydrodynamic models was performed. A raster-based approach with inertia formulation and a relatively high resolution of 100 m, in combination with a fast 1D channel routing model, was chosen.
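The 'inertia formulation' refers to the local inertial approximation of the shallow water equations (Bates et al., 2010), which makes raster flood models fast enough for such domains. A single explicit update of the unit discharge between two neighbouring cells looks roughly like this (a generic sketch with our own variable names, not the benchmarked model's code):

```python
def local_inertial_flux(q, h_flow, dzdx, dt, n_manning=0.05, g=9.81):
    """One explicit time step for the unit discharge q (m^2/s) between two
    raster cells under the local inertial approximation (Bates et al., 2010).

    h_flow : flow depth between the cells (m)
    dzdx   : water-surface slope between the cells (dimensionless)
    """
    # Gravity term accelerates the flow down the water-surface slope ...
    numerator = q - g * h_flow * dt * dzdx
    # ... while an implicit Manning friction term keeps the scheme stable.
    denominator = 1.0 + g * dt * n_manning**2 * abs(q) / h_flow**(7.0 / 3.0)
    return numerator / denominator
```

Friction alone reduces an existing discharge, while a downhill water-surface slope accelerates flow from rest; iterating this flux update over all cell faces, plus mass conservation per cell, gives the inundation dynamics.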
To investigate the suitability of continuous simulation of a full flood risk chain at the large scale, all model parts were integrated into a new framework, the Regional Flood Model (RFM). RFM consists of the hydrological model SWIM, a 1D hydrodynamic river network model, a 2D raster-based inundation model, and the flood loss model FELMOps+r. The model chain was then applied to the Elbe catchment, one of the largest catchments in Germany. For the proof of concept, a continuous simulation was performed for the period 1990-2003, and results were evaluated and validated, as far as possible, against observed data for this period. Although each model part introduces its own uncertainties, results and runtime were generally found to be adequate for continuous simulation at the large catchment scale.
Finally, RFM was applied to a meso-scale catchment in the east of Germany to perform, for the first time, a flood risk assessment with the novel approach of 'derived flood risk based on continuous simulations'. To this end, RFM was driven by long-term synthetic meteorological input generated by a weather generator: a virtual climate time series of 100 x 100 years served as input to RFM, yielding 100 x 100 years of spatially consistent river discharge series, inundation patterns, and damage values. On this basis, flood risk curves and the expected annual damage could be derived directly from damage data, providing a large-scale picture of flood risk. In contrast to traditional flood risk analyses, which assume homogeneous return periods for the entire basin, the presented approach provides a coherent large-scale picture of flood risk in which the spatial variability of occurrence probability is respected and data and methods are consistent. Catchment and floodplain processes are represented in a holistic way: antecedent catchment conditions are implicitly taken into account, as are physical processes such as storage effects, flood attenuation, channel-floodplain interactions, and related damage-influencing effects. Moreover, the simulation of a virtual period of 100 x 100 years, and consequently a large data set of flood loss events, enables the calculation of flood risk directly from damage distributions. Problems associated with transferring probabilities of rainfall or peak runoff to probabilities of damage, as in traditional approaches, are bypassed.
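Deriving the risk curve and expected annual damage (EAD) directly from a synthetic annual damage series can be sketched as follows (empirical plotting positions; the study's exact estimator is not specified here):

```python
import numpy as np

def risk_curve(annual_damage):
    """Expected annual damage and an empirical damage-return period curve
    from a long synthetic series of annual flood damages (e.g. the
    100 x 100 yr output of the continuous model chain)."""
    d = np.sort(np.asarray(annual_damage, dtype=float))[::-1]  # largest first
    n = d.size
    exceedance = np.arange(1, n + 1) / (n + 1)  # Weibull plotting positions
    return_period = 1.0 / exceedance            # years
    ead = d.mean()                              # expected annual damage
    return ead, return_period, d

# Toy series: one damaging year out of ten
ead, rp, dmg = risk_curve([0.0] * 9 + [100.0])
```

Because the damages come from one consistent simulation, every point of the curve carries its own spatially coherent inundation pattern, which is the advantage over assigning a basin-wide return period.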
RFM and the 'derived flood risk approach based on continuous simulations' have the potential to provide flood risk statements for national planning, reinsurance purposes, or other questions where spatially consistent, large-scale assessments are required.