A large landslide (frozen debris avalanche) occurred at Assapaat on the south coast of the Nuussuaq Peninsula in Central West Greenland on June 13, 2021, at 04:04 local time. We present a compilation of available data from field observations, photos, remote sensing, and seismic monitoring to describe the event. Analysis of these data, combined with an analysis of pre- and post-failure digital elevation models, yields the first description of this type of landslide. The frozen debris avalanche initiated as a 6.9 × 10⁶ m³ failure of permafrozen talus slope and underlying colluvium and till at 600-880 m elevation. It entrained a large volume of permafrozen colluvium along its 2.4 km path in two subsequent entrainment phases, accumulating a total volume between 18.3 × 10⁶ and 25.9 × 10⁶ m³. About 3.9 × 10⁶ m³ is estimated to have entered the Vaigat strait; however, no tsunami was reported, nor is one evident in the field. This is probably because the second stage of entrainment, along with a flattening of the slope angle, reduced the mobility of the frozen debris avalanche. We hypothesise that the initial talus slope failure was dynamically conditioned by warming of the ice matrix that binds the permafrozen talus slope. When the slope ice temperature rises to a critical level, its shear resistance is reduced, resulting in an unstable talus slope prone to failure. Likewise, we attribute the large-scale entrainment to increasing slope temperature and take the frozen debris avalanche as a strong sign that the permafrost in this region is increasingly at a critical state. Global warming is amplified in the Arctic, and the frequent landslide events of the past decade in Western Greenland lead us to hypothesise that continued warming will increase the frequency and magnitude of these types of landslides. Essential data for critical Arctic slopes, such as precipitation, snowmelt, and ground and surface temperatures, are still missing to test this hypothesis further. Research funding is therefore urgently needed to better predict the changing landslide threat in the Arctic.
Lakes are dominant and diverse landscape features in the Arctic, but conventional land cover classification schemes typically map them as a single uniform class. Here, we present a detailed lake-centric geospatial database for an Arctic watershed in northern Alaska. We developed a GIS dataset consisting of 4362 lakes that provides information on lake morphometry, hydrologic connectivity, surface area dynamics, surrounding terrestrial ecotypes, and other important conditions describing Arctic lakes. Analyzing the geospatial database relative to fish and bird survey data shows relations to lake depth and hydrologic connectivity, which are being used to guide research and aid in the management of aquatic resources in the National Petroleum Reserve in Alaska. Further development of similar geospatial databases is needed to better understand and plan for the impacts of ongoing climate and land-use changes occurring across lake-rich landscapes in the Arctic.
We present a new autoclave that enables in situ characterization of hydrothermal fluids at high pressures and high temperatures at synchrotron x-ray radiation sources. The autoclave has been specifically designed to enable x-ray absorption spectroscopy in fluids, with applications to mineral solubility and element speciation analysis in hydrothermal fluids of complex compositions. Other applications in high-pressure fluids, such as Raman spectroscopy, are also possible with the autoclave. First experiments were run at pressures between 100 and 600 bar and at temperatures between 25 °C and 550 °C, and preliminary results on scheelite dissolution in fluids of different compositions show that the autoclave is well suited to study the behavior of ore-forming metals at P-T conditions relevant to the Earth's crust.
A hydrochemical approach to quantify the role of return flow in a surface flow-dominated catchment
(2017)
Stormflow generation in headwater catchments dominated by subsurface flow has been studied extensively, yet catchments dominated by surface flow have received less attention. We addressed this by testing whether stormflow chemistry is controlled by either (a) the event-water signature of overland flow, or (b) the pre-event water signature of return flow. We used a high-resolution hydrochemical data set of stormflow and end-members of multiple storms in an end-member mixing analysis to determine the number of end-members needed to explain stormflow, characterize and identify potential end-members, calculate their contributions to stormflow, and develop a conceptual model of stormflow. The arrangement and relative positioning of end-members in stormflow mixing space suggest that saturation excess overland flow (26-48%) and return flow from two different subsurface storage pools (17-53%) are both similarly important for stormflow. These results suggest that pipes and fractures are important flow paths to rapidly release stored water and highlight the value of within-event resolution hydrochemical data to assess the full range and dynamics of flow paths.
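For conservative tracers, the end-member mixing analysis behind such contribution estimates reduces to a small linear system: one tracer mass balance per tracer, plus the constraint that the end-member fractions sum to one. A minimal sketch with three hypothetical end-members and two tracers (all concentrations are illustrative, not values from the study):

```python
# Illustrative end-member mixing analysis (EMMA): recover the fractional
# contributions of three end-members from two conservative tracers.

def solve3(A, b):
    """Solve a 3x3 linear system A f = b by Cramer's rule."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3(A)
    return [det3([[b[r] if c == i else A[r][c] for c in range(3)]
                  for r in range(3)]) / d for i in range(3)]

# Rows: mass balance (fractions sum to 1), then one row per tracer.
# Columns: overland flow, shallow return flow, deep return flow (hypothetical).
A = [[1.0, 1.0, 1.0],    # sum of fractions
     [1.0, 5.0, 10.0],   # tracer 1 (e.g. Ca, mg/L) in each end-member
     [0.5, 2.0, 6.0]]    # tracer 2 (e.g. Si, mg/L) in each end-member
b = [1.0, 4.65, 2.4]     # 1, then tracer concentrations of the stream sample

fractions = solve3(A, b)  # -> [0.4, 0.35, 0.25]
```

With more storms (samples) than end-members, the same system is solved in a least-squares sense, which is what a full EMMA does after projecting samples into the mixing space.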
Lacustrine sediments have been widely used to investigate past climatic and environmental changes on millennial to seasonal time scales. Sedimentary archives of lakes in mountainous regions may also record non-climatic events such as earthquakes. We argue herein that a set of 64 annual laminae couplets reconciles a stratigraphically inconsistent accelerator mass spectrometry (AMS) ¹⁴C chronology in a ~4-m-long sediment core from Lake Mengda, in the north-eastern Tibetan Plateau. The laminations suggest the lake was formed by a large landslide, triggered by the 1927 Gulang earthquake (M = 8.0). The lake sediment sequence can be separated into three units based on lithologic, sedimentary, and isotopic characteristics. Starting from the bottom of the sequence, these are: (1) unweathered, coarse, sandy valley-floor deposits or landslide debris that pre-date the lake, (2) landslide-induced, fine-grained soil or reworked landslide debris with a high organic content, and (3) lacustrine sediments with low organic content and laminations. These annual laminations provide a high-resolution record of anthropogenic and environmental changes during the twentieth century, recording enhanced sediment input associated with two phases of construction activities. The high mean sedimentation rates of up to 4.8 mm yr⁻¹ underscore the potential for reconstructing such distinct sediment pulses in remote, forested, and seemingly undisturbed mountain catchments.
A comprehensive hydrometeorological dataset is presented spanning the period 1 January 2011 to 31 December 2014 to improve the understanding of the hydrological processes leading to flash floods and the relation between rainfall, runoff, erosion and sediment transport in a mesoscale catchment (Auzon, 116 km²) of the Mediterranean region. Badlands are present in the Auzon catchment and well connected to high-gradient channels of bedrock rivers, which promotes the transfer of suspended solids downstream. The number of observed variables, the various sensors involved (both in situ and remote) and the space-time resolution (~km², ~min) of this comprehensive dataset make it a unique contribution to research communities focused on hydrometeorology, surface hydrology and erosion. Given that rainfall is highly variable in space and time in this region, the observation system enables assessment of the hydrological response to rainfall fields. Indeed, (i) rainfall data are provided by rain gauges (both a research network of 21 rain gauges with a 5 min time step and an operational network of 10 rain gauges with a 5 min or 1 h time step), S-band Doppler dual-polarization radars (1 km², 5 min resolution), disdrometers (16 sensors working at 30 s or 1 min time step) and Micro Rain Radars (5 sensors, 100 m height resolution). Additionally, during the special observation period (SOP-1) of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) project, two X-band radars provided precipitation measurements at very fine spatial and temporal scales (1 ha, 5 min). (ii) Other meteorological data are taken from the operational surface weather observation stations of Meteo-France (including 2 m air temperature, atmospheric pressure, 2 m relative humidity, 10 m wind speed and direction, and global radiation) at hourly time resolution (six stations in the region of interest).
(iii) The monitoring of surface hydrology and suspended sediment is multi-scale and based on nested catchments. Three hydrometric stations estimate water discharge at a 2-10 min time resolution. Two of these stations also measure additional physico-chemical variables (turbidity, temperature, conductivity), and water samples are collected automatically during floods, allowing further geochemical characterization of water and suspended solids. Two experimental plots monitor overland flow and erosion at 1 min time resolution on a vineyard hillslope. A network of 11 sensors installed in the intermittent hydrographic network continuously measures water level and water temperature in headwater subcatchments (from 0.17 to 116 km²) at a time resolution of 2-5 min. A network of soil moisture sensors enables the continuous measurement of soil volumetric water content at 20 min time resolution at 9 sites. Additionally, concomitant observations (soil moisture measurements and stream gauging) were performed during floods between 2012 and 2014. Finally, this dataset is considered appropriate for understanding the rainfall variability in time and space at fine scales, improving areal rainfall estimations and progressing in distributed hydrological and erosion modelling.
Reliable hydrological monitoring is the basis for sound water management in drained wetlands. Since statistical methods cannot be employed for unobserved or sparsely monitored areas, the primary design (first set-up) may be arbitrary in most instances. The objective of this paper is therefore to provide a guideline for designing the initial hydrological monitoring network. A scheme is developed that handles different parts of monitoring and hydrometry in wetlands, focusing on the positioning of surface water and groundwater gauges. For placement of the former, control units are used which correspond to areas whose water levels can be regulated separately. The latter are arranged depending on hydrological response units, defined by combinations of soil type and land use, and the chosen surface water monitoring sites. A practical application of the approach is shown for an investigation area in the Spreewald region in north-east Germany. The presented scheme leaves a certain degree of freedom to its user, allowing the inclusion of expert knowledge or special concerns. Based on easily obtainable data, the developed hydrological network serves as a first step in the iterative procedure of monitoring network optimisation.
Growing attention to phytoplankton mixotrophy as a trophic strategy has led to significant revisions of traditional pelagic food web models and ecosystem functioning. Although some empirical estimates of mixotrophy do exist, a much broader set of in situ measurements is required to (i) identify which organisms are acting as mixotrophs in real time and (ii) assess the contribution of their heterotrophy to biogeochemical cycling. Estimates are needed through time and across space to evaluate which environmental conditions or habitats favour mixotrophy, conditions that remain largely unknown. We review methodologies currently available to plankton ecologists to estimate plankton mixotrophy, in particular nanophytoplankton phago-mixotrophy. Methods are based largely on fluorescent or isotopic tracers, but also take advantage of genomics to identify phylotypes and function. We also suggest novel methods on the cusp of use for phago-mixotrophy assessment, including single-cell measurements that improve our capacity to estimate mixotrophic activity and rates in wild plankton communities down to the single-cell level. Future methods will benefit from advances in nanotechnology, micromanipulation and microscopy combined with stable isotope and genomic methodologies. Improved estimates of mixotrophy will enable more reliable models to predict changes in food web structure and biogeochemical flows in a rapidly changing world.
A ground motion logic tree for seismic hazard analysis in the stable cratonic region of Europe
(2020)
Regions of low seismicity present a particular challenge for probabilistic seismic hazard analysis when identifying suitable ground motion models (GMMs) and quantifying their epistemic uncertainty. The 2020 European Seismic Hazard Model adopts a scaled backbone approach to characterise this uncertainty for shallow seismicity in Europe, incorporating region-to-region source and attenuation variability based on European strong motion data. This approach, however, may not be suited to the stable cratonic region of northeastern Europe (encompassing Finland, Sweden and the Baltic countries), where exploration of various global geophysical datasets reveals crustal properties distinctly different from those of the rest of Europe, and instead more closely represented by those of the Central and Eastern United States. Building upon the suite of models developed by the recent NGA-East project, we construct a new scaled backbone ground motion model and calibrate its corresponding epistemic uncertainties. The resulting logic tree is shown to provide hazard outcomes comparable to those of the epistemic uncertainty modelling strategy adopted for the Eastern United States, despite the different approaches taken. Comparison with previous GMM selections for northeastern Europe, however, highlights key differences in short-period accelerations resulting from new assumptions regarding the characteristics of the reference rock and its influence on site amplification.
Deep hydrothermal Mo, W, and base metal mineralization at the Sweet Home mine (Detroit City portal) formed in response to magmatic activity during the Oligocene. Microthermometric data from fluid inclusions trapped in greisen quartz and fluorite suggest that the early-stage mineralization at the Sweet Home mine precipitated from low- to medium-salinity (1.5-11.5 wt% NaCl equiv.), CO₂-bearing fluids at temperatures between 360 and 415 °C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home mine were related to fluids of different origins. Early magmatic fluids were the principal source of mantle-derived volatiles (CO₂, H₂S/SO₂, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by the δ²H-δ¹⁸O relationships of fluid inclusion waters. The deep hydrothermal mineralization at the Sweet Home mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home mine were triggered by a deep-seated magmatic intrusion. The findings of this study are in good agreement with the results of previous fluid inclusion studies of the mineralization of the Sweet Home mine and from Climax-type Mo porphyry deposits in the Colorado Mineral Belt.
Advances in the field of seismic interferometry have provided a basic theoretical interpretation of the full spectrum of the microtremor horizontal-to-vertical spectral ratio [H/V(f)]. The interpretation has been applied to ambient seismic noise data recorded both at the surface and at depth. The new algorithm, based on the diffuse wavefield assumption, has been used in inversion schemes to estimate seismic wave velocity profiles, which are useful input for engineering and exploration seismology, both for earthquake hazard estimation and for characterizing surficial sediments. Until now, however, the developed algorithms have been suitable only for onshore environments, with no offshore counterpart. Here, the microtremor H/V(z, f) modelling is extended for application to marine sedimentary environments in a 1-D layered medium. The layer propagator matrix formulation is used for the computation of the required Green's functions. In the presence of a water layer on top, the propagator matrix for the uppermost layer is therefore defined to account for the properties of the water column. As an application example we analyse eight simple canonical layered earth models. Frequencies ranging from 0.2 to 50 Hz are considered, as they cover a broad wavelength interval and in practice aid the investigation of subsurface structures in the depth range from a few meters to a few hundred meters. Results show a marginal variation of at most 8 per cent in the fundamental frequency when a water layer is present. The water layer leads to variations in H/V peak amplitude of up to 50 per cent atop the solid layers.
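The quantity being modelled can be illustrated numerically. In the usual convention, H/V(f) is the quadratic mean of the two horizontal amplitude spectra divided by the vertical amplitude spectrum. A self-contained sketch on a synthetic three-component record (amplitudes and frequency are arbitrary, for illustration only):

```python
import cmath
import math

def amplitude(signal, k):
    """|DFT coefficient| of `signal` at integer frequency index k."""
    n = len(signal)
    coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
    return abs(coeff)

# Synthetic record: horizontal components with amplitude 2, vertical with
# amplitude 1, all at frequency index k0.
n, k0 = 256, 8
north = [2.0 * math.sin(2 * math.pi * k0 * t / n) for t in range(n)]
east  = [2.0 * math.sin(2 * math.pi * k0 * t / n) for t in range(n)]
vert  = [1.0 * math.sin(2 * math.pi * k0 * t / n) for t in range(n)]

# H/V at frequency index k0: quadratic mean of horizontals over vertical.
hv = math.sqrt((amplitude(north, k0) ** 2 + amplitude(east, k0) ** 2) / 2) \
     / amplitude(vert, k0)
# hv -> 2.0, the horizontal-to-vertical amplitude ratio
```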
The present study proposes a General Probabilistic Framework (GPF) for uncertainty and global sensitivity analysis of deterministic models in which, in addition to scalar inputs, non-scalar and correlated inputs can be considered as well. The analysis is conducted with the variance-based approach of Sobol/Saltelli, in which first-order and total sensitivity indices are estimated. The results of the framework can be used in a loop for model improvement, parameter estimation or model simplification. The framework is applied to SWAP, a hydrological model for the transport of water, solutes and heat in unsaturated and saturated soils. The sources of uncertainty are grouped in five main classes: model structure (soil discretization), input (weather data), time-varying (crop) parameters, scalar parameters (soil properties) and observations (measured soil moisture). For each source of uncertainty, different realizations are created based on direct monitoring activities. Uncertainty of evapotranspiration, soil moisture in the root zone and bottom fluxes below the root zone is considered in the analysis. The results show that the sources of uncertainty are different for each output considered, and it is necessary to consider multiple output variables for a proper assessment of the model. Improvements in the performance of the model can be achieved by reducing the uncertainty in the observations, in the soil parameters and in the weather data. Overall, the study shows the capability of the GPF to quantify the relative contribution of the different sources of uncertainty and to identify the priorities required to improve the performance of the model. The proposed framework can be extended to a wide variety of modelling applications, also when direct measurements of model output are not available.
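The variance-based Sobol/Saltelli approach estimates first-order indices from paired Monte Carlo sample matrices. A minimal sketch on a toy additive model with known analytical indices (the estimator follows the standard Saltelli pick-freeze scheme; the toy model is illustrative and has nothing to do with SWAP):

```python
import random

def sobol_first_order(f, d, n=20000, seed=42):
    """Monte Carlo estimate of first-order Sobol indices for a model f
    with d independent U(0,1) inputs, using the Saltelli pick-freeze
    estimator S_i = E[f(B) * (f(A_B^i) - f(A))] / Var[f]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(d):
        # A_B^i: matrix A with column i taken from B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        si = sum(yb * (yab - ya)
                 for ya, yb, yab in zip(fA, fB, fABi)) / n / var
        indices.append(si)
    return indices

# Toy additive model Y = 2*x1 + x2: analytically S1 = 0.8, S2 = 0.2.
S = sobol_first_order(lambda x: 2 * x[0] + x[1], d=2)
```

For correlated or non-scalar inputs, as handled by the GPF, this plain estimator no longer applies directly and grouped or reordered sampling schemes are needed.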
A 741-cm-long laminated sediment core, covering the last 10,800 years, was collected from Lake Zigetang, central Tibetan Plateau (90.9°E, 32.0°N, 4560 m a.s.l.), and analysed palynologically at 69 horizons. Biome reconstruction suggests a dominance of temperate steppe vegetation (mainly Artemisia and Poaceae) on the central Tibetan Plateau during the first half of the Holocene (10.8-4.4 cal. ka BP), while alpine steppes with desert elements (mainly Cyperaceae, Poaceae, Chenopodiaceae, and characteristic high-alpine herb families) tend to dominate the second half (4.4-0 cal. ka BP). The Artemisia/Cyperaceae ratio, a semi-quantitative measure of summer temperature, indicates a general cooling trend throughout the Holocene. Dense temperate steppe vegetation and maximum desert plant withdrawal, however, indicate that a suitable balance of wet and warm conditions for optimum vegetation growth likely occurred during the middle Holocene (7.3-4.4 cal. ka BP). Severe Early Holocene cold events have been reconstructed for 8.7-8.3 and ~7.4 cal. ka BP.
In this paper we evaluate different methods to predict soil erosion processes. We derived different layers of predictor variables for the study area in the Northern Chianti, Italy, describing the soil-lithologic complex, land use, and topographic characteristics. For a subcatchment of the Orme River, we mapped erosion processes by interpreting aerial photographs and field observations. These were classified as erosional response units (ERU), i.e. spatial areas of homogeneous erosion processes. The ERU were used as the response variable in the soil erosion modelling process. We applied two models, (i) bootstrap aggregation (Random Forest: RF) and (ii) stochastic gradient boosting (TreeNet: TN), to predict the potential spatial distribution of erosion processes for the entire Orme River catchment. The models are statistically evaluated using training data and a set of performance parameters such as the area under the receiver operating characteristic curve (AUC), Cohen's Kappa, and pseudo-R². Variable importance and response curves provide further insight into controlling factors of erosion. Both models provided good performance in terms of classification and calibration; however, TN outperformed RF. Similar classes, such as active and inactive landslides, can be discriminated and well interpreted by considering response curves and relative variable importance. The spatial distribution of the predicted erosion susceptibilities generally follows topographic constraints and is similar for both models. Hence, the model-based delineation of ERU on the basis of soil and terrain information is a valuable tool in geomorphology; it provides insights into factors controlling erosion processes and may allow the extrapolation and prediction of erosion processes in unsurveyed areas.
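Of the performance measures listed, AUC has a convenient rank interpretation: it is the probability that a randomly chosen positive case is scored above a randomly chosen negative one (the Mann-Whitney U statistic). A minimal sketch with hypothetical susceptibility scores (not values from the study):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    probability that a random positive case scores higher than a
    random negative one (ties count one half)."""
    wins = 0.0
    for p in scores_pos:
        for q in scores_neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical susceptibility scores for mapped ERU cells:
active   = [0.9, 0.8, 0.75, 0.6]   # cells mapped as actively eroding
inactive = [0.7, 0.4, 0.3, 0.2]    # cells mapped as stable
# auc(active, inactive) -> 0.9375; 0.5 is random, 1.0 is perfect ranking
```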
The application of nanoscale zero-valent iron (nZVI) for subsurface remediation of groundwater contaminants is a promising new technology, which can be understood as an alternative to the permeable reactive barrier technique using granular iron. Dechlorination of organic contaminants by zero-valent iron seems promising. Currently, one limitation to widespread deployment is the fast agglomeration and sedimentation of nZVI in colloidal suspensions, even more so in soils and sediments, which limits the applicability for the treatment of sources and plumes of contamination. Colloid-supported nZVI shows promising characteristics to overcome these limitations. Mobility of Carbo-Iron Colloids (CIC), a newly developed composite material based on finely ground activated carbon as a carrier for nZVI, was tested in a field application: in this study, a horizontal dipole flow field was established between two wells separated by 53 m in a confined, natural aquifer. The injection/extraction rate was 500 L/h. Approximately 12 kg of CIC was suspended with the polyanionic stabilizer carboxymethyl cellulose. The suspension was introduced into the aquifer at the injection well. Breakthrough of CIC was observed visually and based on total particle and iron concentrations detected in samples from the extraction well. Filtration of water samples revealed a particle breakthrough of about 12% of the amount introduced. This demonstrates the high mobility of CIC particles, and we suggest that nZVI carried on CIC can be used for contaminant plume remediation by in situ formation of reactive barriers.
A fast and sensitive method for the continuous determination of methane (CH₄) and its stable carbon isotopic values (δ¹³C-CH₄) in surface waters was developed by applying a vacuum to a gas/liquid exchange membrane and measuring the extracted gases with a portable cavity ring-down spectroscopy analyser (M-CRDS). The M-CRDS was calibrated and characterized for CH₄ concentration and δ¹³C-CH₄ with synthetic water standards. The detection limit of the M-CRDS for the simultaneous determination of CH₄ and δ¹³C-CH₄ is 3.6 nmol L⁻¹ CH₄. For single measurements and averaging times of 10 min, a measurement precision of 1.1% for CH₄ concentration and 1.7‰ (1σ) for δ¹³C-CH₄ was achieved, with accuracies of 1.3% and 0.8‰ (1σ), respectively. The response time τ of 57 ± 5 s allows determination of δ¹³C-CH₄ values more than twice as fast as other methods. The M-CRDS method was applied and tested at Lake Stechlin (Germany) and compared with the headspace gas chromatography and fast-membrane CH₄ concentration methods. Maximum CH₄ concentrations (577 nmol L⁻¹) and the lightest δ¹³C-CH₄ (−35.2‰) were found around the thermocline in depth-profile measurements. The M-CRDS method was in good agreement with the other methods. Temporal variations in CH₄ concentration and δ¹³C-CH₄ obtained in 24 h measurements indicate either local methane production/oxidation or physical variations in the thermocline. These results illustrate the need for fast and sensitive analyses to achieve a better understanding of the different mechanisms and pathways of CH₄ formation in aquatic environments.
Past climatic change can be reconstructed from sedimentary archives by a number of proxies. However, few methods exist to directly estimate hydrological changes and even fewer result in quantitative data, impeding our understanding of the timing, magnitude and mechanisms of hydrological changes. Here we present a novel approach based on δ²H values of sedimentary lipid biomarkers in combination with plant physiological modeling to extract quantitative information on past changes in relative humidity. Our initial application to an annually laminated lacustrine sediment sequence from western Europe deposited during the Younger Dryas cold period revealed relative humidity changes of up to 15% over sub-centennial timescales, leading to major ecosystem changes, in agreement with palynological data from the region. We show that by combining organic geochemical methods and mechanistic plant physiological models on well characterized lacustrine archives it is possible to extract quantitative ecohydrological parameters from sedimentary lipid biomarker δ²H data.
The molecular biomarker composition of two sediment cores from Sanabria Lake (NW Iberian Peninsula) and a survey of modern plants in the watershed provide a reconstruction of past vegetation and landscape dynamics since deglaciation. During a proglacial stage in Lake Sanabria (prior to 14.7 cal ka BP), very low biomarker concentrations and carbon preference index (CPI) values of ~1 suggest that the n-alkanes could have derived from eroded ancient sediment sources or older organic matter with a high degree of maturity. During the Late glacial (14.7-11.7 cal ka BP) and the Holocene (last 11.7 cal ka BP), intervals with higher biomarker and triterpenoid concentrations (high %nC₂₉ and nC₃₁ alkanes), higher CPI and average chain length (ACL), and lower P_aq (proportion of aquatic plants) are indicative of a major contribution of vascular land plants from a more forested watershed (e.g. the Mid-Holocene period, 7.0-4.0 cal ka BP). Lower biomarker concentrations (high %nC₂₇ alkanes), CPI and ACL values responded to short phases with decreased allochthonous contribution into the lake, corresponding to centennial-scale periods of regional forest decline (e.g. 4-3 ka BP, Roman deforestation after 2.0 ka, and some phases of the LIA, seventeenth to nineteenth centuries). Human activities in the watershed were significant during early medieval times (1.3-1.0 cal ka BP) and since 1960 CE, in both cases associated with relatively higher-productivity stages in the lake (lower biomarker and triterpenoid concentrations, high %nC₂₃ and %nC₃₁, respectively, lower ACL and CPI values, and higher P_aq). The lipid composition of Sanabria Lake sediments indicates a major allochthonous (watershed-derived) contribution to the organic matter budget since deglaciation, and a dominant oligotrophic status during the lake's history. The study constrains climatic and anthropogenic forcings, and watershed versus lake sources, in organic matter accumulation processes, and helps to design conservation and management policies for oligotrophic mountain lakes.
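The indices used here (ACL, CPI, P_aq) are simple functions of the n-alkane abundance distribution. A sketch with hypothetical abundances, following the commonly used definitions (Bray-Evans-style CPI, Ficken et al. P_aq); the numbers are invented for illustration, not data from the cores:

```python
def acl(abund, lo, hi):
    """Average chain length over carbon numbers lo..hi."""
    chains = [n for n in range(lo, hi + 1) if n in abund]
    return sum(n * abund[n] for n in chains) / sum(abund[n] for n in chains)

def cpi(abund, lo, hi):
    """Carbon preference index over odd chains lo..hi: odd-chain sum
    averaged against the two flanking even-chain windows."""
    odd = sum(abund.get(n, 0.0) for n in range(lo, hi + 1, 2))
    even_low = sum(abund.get(n, 0.0) for n in range(lo - 1, hi, 2))
    even_high = sum(abund.get(n, 0.0) for n in range(lo + 1, hi + 2, 2))
    return 0.5 * (odd / even_low + odd / even_high)

def p_aq(abund):
    """P_aq: aquatic-plant (nC23, nC25) vs land-plant (nC29, nC31) alkanes."""
    aq = abund.get(23, 0.0) + abund.get(25, 0.0)
    return aq / (aq + abund.get(29, 0.0) + abund.get(31, 0.0))

# Hypothetical n-alkane abundances (arbitrary units) with the odd-over-even
# dominance typical of vascular land-plant input:
alkanes = {23: 2.0, 24: 0.5, 25: 3.0, 26: 0.6, 27: 6.0, 28: 0.8,
           29: 10.0, 30: 1.0, 31: 12.0, 32: 0.9, 33: 4.0}
```

High CPI and ACL with low P_aq, as in this hypothetical distribution, would be read as a strong terrestrial vascular-plant contribution, matching the interpretation in the abstract.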
Earthquake localization is both a necessity within the field of seismology and a prerequisite for further analysis such as source studies and hazard assessment. Traditional localization methods often rely on manually picked phases. We present an alternative approach using deep learning that, once trained, can predict hypocenter locations efficiently. In seismology, neural networks have typically been trained with either single-station records or features extracted previously from the waveforms. We use three-component full-waveform records of multiple stations directly. This means no information is lost during preprocessing, and preparation of the data does not require expert knowledge. The first convolutional layer of our deep convolutional neural network (CNN) becomes sensitive to features that characterize the waveforms it is trained on. We show that this layer can therefore additionally be used as an event detector. As a test case, we trained our CNN using more than 2000 earthquake swarm events from West Bohemia, recorded by nine local three-component stations. The CNN successfully located 908 validation events with standard deviations of 56.4 m in the east-west direction, 123.8 m in the north-south direction, and 136.3 m in the vertical direction compared to a double-difference relocated reference catalog. The detector is sensitive to events with magnitudes down to M_L = −0.8, with 3.5% false-positive detections.
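The observation that the first convolutional layer doubles as an event detector can be illustrated with the operation such a layer computes: a sliding dot product between the waveform and a kernel, which responds most strongly where the kernel's feature occurs. A minimal sketch with a hypothetical wavelet and no training (the kernel is simply set to the wavelet, making this a matched filter; bias and nonlinearity are omitted):

```python
def cross_correlate(signal, kernel):
    """Valid-mode cross-correlation: the core operation of one channel
    of a 1-D convolutional layer."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# Hypothetical wavelet embedded in an otherwise flat trace at sample 60.
wavelet = [0.0, 1.0, -2.0, 1.5, -0.5]
trace = [0.0] * 120
for j, w in enumerate(wavelet):
    trace[60 + j] = w

response = cross_correlate(trace, wavelet)
onset = max(range(len(response)), key=response.__getitem__)
# onset -> 60: the filter output peaks where its feature occurs,
# which is what makes a trained first layer usable as a detector
```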
We present an experimental approach to study the three-dimensional microstructure of gas diffusion layer (GDL) materials under realistic compression conditions. A dedicated compression device was designed that allows for synchrotron-tomographic investigation of circular samples under well-defined compression conditions. The tomographic data provide the experimental basis for stochastic modeling of nonwoven GDL materials. A plain compression tool is used to study the fiber courses in the material at different compression stages. Transport-relevant geometrical parameters, such as porosity, pore size, and tortuosity distributions, are evaluated exemplarily for a GDL sample in the uncompressed state and for a compression of 30 vol.%. To mimic the geometry of the flow field, we employed a compression punch with an integrated channel-rib profile. It turned out that the GDL material is homogeneously compressed under the ribs but much less compressed underneath the channel. GDL fibers extend far into the channel volume, where they might interfere with the convective gas transport and the removal of liquid water from the cell.
In this study we investigate a dayside, midlatitude plasma depletion (DMLPD) encountered on 22 May 2014 by the Swarm and GRACE satellites, as well as ground-based instruments. The DMLPD was observed near Puerto Rico by Swarm near 10 LT under quiet geomagnetic conditions at altitudes of 475-520 km and magnetic latitudes of ~25°-30°. The DMLPD was also revealed in total electron content observations by the Saint Croix station and by the GRACE satellites (430 km) near 16 LT and near the same geographic location. The unique Swarm constellation enables the horizontal tilt of the DMLPD to be measured (35° clockwise from the geomagnetic east-west direction). Ground-based airglow images at Arecibo showed no evidence for plasma density depletions during the night prior to this dayside event. The C/NOFS equatorial satellite showed evidence for very modest plasma density depletions that had rotated into the morningside from the nightside. However, the equatorial depletions do not appear related to the DMLPD, for which the magnetic apex height is about 2500 km. The origins of the DMLPD are unknown but may be related to gravity waves.
Compound natural hazards like El Niño events cause high damage to society, and managing this damage requires reliable risk assessments. Damage modelling is a prerequisite for quantitative risk estimations, yet many procedures still rely on expert knowledge, and empirical studies investigating damage from compound natural hazards hardly exist. A nationwide building survey in Peru after the El Niño event of 2017 - which caused intense rainfall, ponding water, flash floods and landslides - enables us to apply data-mining methods for statistical groundwork, using explanatory features generated from remote sensing products and open data. We separate regions of different dominant characteristics through unsupervised clustering, and investigate feature importance rankings for classifying damage via supervised machine learning. Besides the expected effect of precipitation, the classification algorithms select the topographic wetness index as the most important feature, especially in low elevation areas. The slope length and steepness factor ranks high for mountains and canyons. Partial dependence plots further hint at amplified vulnerability in rural areas. An example of an empirical damage probability map, developed with a random forest model, is provided to demonstrate the technical feasibility.
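The feature-importance ranking described above can be illustrated with a minimal permutation-importance sketch. All data and names below are hypothetical stand-ins (not the study's survey data, nor its random forest model): a toy classifier is scored before and after shuffling one feature column, and the accuracy drop serves as that feature's importance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "survey": 'twi' (topographic wetness index) drives building damage,
# 'noise' is an uninformative feature. Purely illustrative data.
n = 400
twi = rng.normal(0.0, 1.0, n)
noise = rng.normal(0.0, 1.0, n)
X = np.column_stack([twi, noise])
y = (twi + 0.3 * rng.normal(0.0, 1.0, n) > 0.0).astype(int)  # damaged: yes/no

def accuracy(X, y):
    """Toy classifier: predict damage whenever the first feature exceeds 0."""
    return float(((X[:, 0] > 0.0).astype(int) == y).mean())

def permutation_importance(X, y, feature, n_repeats=20):
    """Mean accuracy drop when one feature column is shuffled."""
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        drops.append(base - accuracy(Xp, y))
    return float(np.mean(drops))
```

Shuffling the informative feature destroys predictive skill, while shuffling the noise feature leaves accuracy unchanged, reproducing the kind of ranking the study derives with a random forest.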
Protection of natural or semi-natural ecosystems is an important part of societal strategies for maintaining biodiversity, ecosystem services, and achieving overall sustainable development. The assessment of multiple emerging land use trade-offs is complicated by the fact that land use changes occur and have consequences at local, regional, and even global scale. Outcomes also depend on the underlying socio-economic trends. We apply a coupled, multi-scale modelling system to assess an increase in nature protection areas as a key policy option in the European Union (EU). The main goal of the analysis is to understand the interactions between policy-induced land use changes across different scales and sectors under two contrasting future socio-economic pathways. We demonstrate how complementary insights into land system change can be gained by coupling land use models for agriculture, forestry, and urban areas for Europe, in connection with other world regions. The simulated policy case of nature protection shows how the allocation of a certain share of total available land to newly protected areas, with specific management restrictions imposed, may have a range of impacts on different land-based sectors until the year 2040. Agricultural land in Europe is slightly reduced, which is partly compensated for by higher management intensity. As a consequence of higher costs, total calorie supply per capita is reduced within the EU. While wood harvest is projected to decrease, carbon sequestration rates increase in European forests. At the same time, imports of industrial roundwood from other world regions are expected to increase. Some of the aggregate effects of nature protection have very different implications at the local to regional scale in different parts of Europe. Due to nature protection measures, agricultural production is shifted from more productive land in Europe to on average less productive land in other parts of the world. 
This increases, at the global level, the allocation of land resources for agriculture, leading to a decrease in tropical forest areas, reduced carbon stocks, and higher greenhouse gas emissions outside of Europe. The integrated modelling framework provides a method to assess the land use effects of a single policy option while accounting for the trade-offs between locations, and between regional, European, and global scales.
Monsoon systems around the world are governed by the so-called moisture-advection feedback. Here we show that, in a minimal conceptual model, this feedback implies a critical threshold with respect to the atmospheric specific humidity q_o over the ocean adjacent to the monsoon region. If q_o falls short of this critical value q_o^c, monsoon rainfall over land cannot be sustained. Such a case could occur if evaporation from the ocean was reduced, e.g. due to low sea surface temperatures. Within the restrictions of the conceptual model, we estimate q_o^c from present-day reanalysis data for four major monsoon systems, and demonstrate how this concept can help understand abrupt variations in monsoon strength on orbital timescales as found in proxy records.
The Upper Cretaceous (Campanian-Maastrichtian) bioclastic wedge of the Orfento Formation in the Montagna della Maiella, Italy, is compared to newly discovered contourite drifts in the Maldives. Like the drift deposits in the Maldives, the Orfento Formation fills a channel and builds a Miocene delta-shaped and mounded sedimentary body in the basin that is similar in size to the approximately 350 km² large coarse-grained bioclastic Miocene delta drifts in the Maldives. The composition of the bioclastic wedge of the Orfento Formation is also exclusively bioclastic debris sourced from the shallow-water areas and reworked clasts of the Orfento Formation itself. In the near mud-free succession, age-diagnostic fossils are sparse. The depositional textures vary from wackestone to float-rudstone and breccia/conglomerates, but rocks with grainstone and rudstone textures are the most common facies. In the channel, lensoid convex-upward breccias, cross-cutting channelized beds and thick grainstone lobes with abundant scours indicate alternating erosion and deposition from a high-energy current. In the basin, the mounded sedimentary body contains lobes with a divergent progradational geometry. The lobes are built by decametre thick composite megabeds consisting of sigmoidal clinoforms that typically have a channelized topset, a grainy foreset and a fine-grained bottomset with abundant irregular angular clasts. Up to 30 m thick channels filled with intraformational breccias and coarse grainstones pinch out downslope between the megabeds. In the distal portion of the wedge, stacked grainstone beds with foresets and reworked intraclasts document continuous sediment reworking and migration. The bioclastic wedge of the Orfento Formation has been variously interpreted as a succession of sea-level controlled slope deposits, a shoaling shoreface complex, or a carbonate tidal delta.
Current-controlled delta drifts in the Maldives, however, offer a new interpretation because of their similarity in architecture and composition. These similarities include: (i) a feeder channel opening into the basin; (ii) an excavation moat at the exit of the channel; (iii) an overall mounded geometry with an apex that is in shallower water depth than the source channel; (iv) progradation of stacked lobes; (v) channels that pinch out in a basinward direction; and (vi) smaller channelized intervals that are arranged in a radial pattern. As a result, the Upper Cretaceous (Campanian-Maastrichtian) bioclastic wedge of the Orfento Formation in the Montagna della Maiella, Italy, is here interpreted as a carbonate delta drift.
The literature contains a sizable number of publications where weather types are used to decompose climate shifts or trends into contributions of frequency and mean of those types. They are all based on the product rule, that is, a transformation of a product of sums into a sum of products, the latter providing the decomposition. While there is nothing to argue about the transformation itself, its interpretation as a climate shift or trend decomposition is bound to fail. While the case of a climate shift may be viewed as an incomplete description of a more complex behaviour, trend decomposition indeed produces bogus trends, as demonstrated by a synthetic counterexample with well-defined trends in type frequency and mean. Consequently, decompositions based on that transformation, be it for climate shifts or trends, must not be used.
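The product-rule transformation referred to above is a pure algebraic identity: a shift in the overall mean splits exactly into a frequency term, a within-type mean term and a cross term, summed over weather types. A minimal numerical check (with made-up frequencies and type means) confirms the identity itself, even though - as the abstract argues - reading these terms as a physical decomposition is not justified.

```python
# Two periods, three weather types: occurrence frequencies f and type means m.
# Values are arbitrary illustrations.
f1, m1 = [0.5, 0.3, 0.2], [10.0, 20.0, 30.0]
f2, m2 = [0.4, 0.4, 0.2], [12.0, 19.0, 31.0]

def overall(f, m):
    """Overall mean as the frequency-weighted sum of type means."""
    return sum(fi * mi for fi, mi in zip(f, m))

shift = overall(f2, m2) - overall(f1, m1)

# Product rule: shift = frequency term + mean term + cross term
freq_term = sum((fb - fa) * ma for fa, fb, ma in zip(f1, f2, m1))
mean_term = sum(fa * (mb - ma) for fa, ma, mb in zip(f1, m1, m2))
cross_term = sum((fb - fa) * (mb - ma) for fa, fb, ma, mb in zip(f1, f2, m1, m2))

assert abs(shift - (freq_term + mean_term + cross_term)) < 1e-12
```

The identity always holds as a transformation; the paper's point is that attaching the labels "frequency-driven" and "mean-driven" to these terms produces spurious trend attributions.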
Seismicity models are probabilistic forecasts of earthquake rates to support seismic hazard assessment.
Physics-based models allow extrapolating previously unsampled parameter ranges and enable conclusions on underlying tectonic or human-induced processes.
The Coulomb Failure (CF) and the rate-and-state (RS) models are two widely used physics-based seismicity models both assuming pre-existing populations of faults responding to Coulomb stress changes.
The CF model depends on the absolute Coulomb stress and assumes instantaneous triggering if stress exceeds a threshold, while the RS model only depends on stress changes.
Both models can predict background earthquake rates and time-dependent stress effects, but the RS model with its three independent parameters can additionally explain delayed aftershock triggering.
This study introduces a modified CF model where the instantaneous triggering is replaced by a mean time-to-failure depending on the absolute stress value.
For the specific choice of an exponential dependence on stress and a stationary initial seismicity rate, we show that the model leads to identical results as the RS model and reproduces the Omori-Utsu relation for aftershock decays as well as stress-shadowing effects.
Thus, both CF and RS models can be seen as special cases of the new model. However, the new stress response model can also account for subcritical initial stress conditions and alternative functions of the mean time-to-failure depending on the problem and fracture mode.
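The delayed aftershock triggering that the RS model predicts (and that, per this study, the exponential time-to-failure CF variant reproduces) can be sketched with Dieterich's classic rate equation for the seismicity response to a stress step. The parameter values below are arbitrary illustrations, not fitted to any data.

```python
import math

def rs_rate(t, r0, d_cfs, a_sigma, t_a):
    """Dieterich (1994) seismicity rate at time t after a Coulomb stress step
    d_cfs: r0 is the background rate, a_sigma the rate-and-state constitutive
    parameter times normal stress, and t_a the aftershock relaxation time."""
    gamma = math.exp(-d_cfs / a_sigma)
    return r0 / (1.0 + (gamma - 1.0) * math.exp(-t / t_a))

# A positive stress step gives an immediate rate jump, an Omori-like ~1/t
# decay at intermediate times, and relaxation back to the background rate:
rates = [rs_rate(t, 1.0, math.log(1e6), 1.0, 1.0) for t in (1e-3, 1e-2, 10.0)]
```

In the intermediate regime the rate scales roughly as 1/t (doubling t halves the rate), which is the Omori-Utsu behaviour mentioned in the abstract; a negative stress step instead produces the stress-shadow rate deficit.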
Design flood estimation is an essential part of flood risk assessment. Commonly applied are flood frequency analyses and design storm approaches, while the derived flood frequency using continuous simulation has been receiving more attention recently. In this study, a continuous hydrological modelling approach on an hourly time scale, driven by a multi-site weather generator in combination with a k-nearest neighbour resampling procedure based on the method of fragments, is applied. The derived 100-year flood estimates in 16 catchments in Vorarlberg (Austria) are compared to (a) the flood frequency analysis based on observed discharges, and (b) a design storm approach. Besides the peak flows, the corresponding runoff volumes are analysed. The spatial dependence structure of the synthetically generated flood peaks is validated against observations. It can be demonstrated that the continuous modelling approach achieves plausible results and shows a large variability in runoff volume across the flood events.
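The method of fragments mentioned above can be sketched in a few lines: a simulated daily total borrows the hourly pattern ("fragments") of one of its k nearest observed days. The record below is synthetic, and the sketch omits the multi-site and seasonal conditioning used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed record: daily totals (mm) and their hourly
# "fragments", i.e. hourly shares that sum to 1 for each observed day.
obs_daily = rng.gamma(2.0, 5.0, 200)
raw = rng.random((200, 24))
obs_fragments = raw / raw.sum(axis=1, keepdims=True)

def disaggregate(day_total, k=5):
    """Method of fragments: give a simulated daily total the hourly pattern
    of one of its k nearest observed days (nearest by daily total)."""
    idx = np.argsort(np.abs(obs_daily - day_total))[:k]
    return day_total * obs_fragments[rng.choice(idx)]

hourly = disaggregate(12.0)  # hourly series that preserves the daily total
```

By construction the disaggregated hourly values are non-negative and sum exactly to the daily total, while the sub-daily structure is inherited from observations.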
A confocal set-up is presented that improves micro-XRF and XAFS experiments with high-pressure diamond-anvil cells (DACs). In this experiment a probing volume is defined by the focus of the incoming synchrotron radiation beam and that of a polycapillary X-ray half-lens with a very long working distance, which is placed in front of the fluorescence detector. This set-up enhances the quality of the fluorescence and XAFS spectra, and thus the sensitivity for detecting elements at low concentrations. It efficiently suppresses signal from outside the sample chamber, which stems from elastic and inelastic scattering of the incoming beam by the diamond anvils as well as from excitation of fluorescence from the body of the DAC.
Land-use concepts provide decision support for the most efficient usage options according to sustainable development and multifunctionality requirements. However, developments in landscape-related, agricultural production schemes are primarily driven by economic benefits. Therefore, most agricultural land-use concepts tackle particular problems or interests and lack a systemic perspective. As a result, we discuss a conceptual model for future site-specific agricultural land-use with an inbuilt requirement for adequate experimental sites to enable monitoring systems for a new generation of ecosystem models and for new approaches to address science-stakeholder interactions.
To provide physically based wind modelling for wind erosion research at regional scale, a 3D computational fluid dynamics (CFD) wind model was developed. The model was programmed in C language based on the Navier-Stokes equations, and it is freely available as open source. Integrated with the spatial analysis and modelling tool (SAMT), the wind model has convenient input preparation and powerful output visualization. To validate the wind model, a series of experiments was conducted in a wind tunnel. A blocking inflow experiment was designed to test the performance of the model on simulation of basic fluid processes. A round obstacle experiment was designed to check if the model could simulate the influences of the obstacle on wind field. Results show that measured and simulated wind fields have high correlations, and the wind model can simulate both the basic processes of the wind and the influences of the obstacle on the wind field. These results show the high reliability of the wind model. A digital elevation model (DEM) of an area (3800 m long and 1700 m wide) in the Xilingele grassland in Inner Mongolia (autonomous region, China) was applied to the model, and a 3D wind field has been successfully generated. The clear implementation of the model and the adequate validation by wind tunnel experiments laid a solid foundation for the prediction and assessment of wind erosion at regional scale.
A comprehensive workflow to analyze ensembles of globally inverted 2D electrical resistivity models
(2022)
Electrical resistivity tomography (ERT) aims at imaging the subsurface resistivity distribution and provides valuable information for different geological, engineering, and hydrological applications. To obtain a subsurface resistivity model from measured apparent resistivities, stochastic or deterministic inversion procedures may be employed. Typically, the inversion of ERT data results in non-unique solutions; i.e., an ensemble of different models explains the measured data equally well. In this study, we perform inference analysis of model ensembles generated using a well-established global inversion approach to assess uncertainties related to the nonuniqueness of the inverse problem. Our interpretation strategy starts by establishing model selection criteria based on different statistical descriptors calculated from the data residuals. Then, we perform cluster analysis considering the inverted resistivity models and the corresponding data residuals. Finally, we evaluate model uncertainties and residual distributions for each cluster. To illustrate the potential of our approach, we use a particle swarm optimization (PSO) algorithm to obtain an ensemble of 2D layer-based resistivity models from a synthetic data example and a field data set collected in Loon-Plage, France. Our strategy performs well for both synthetic and field data and allows us to extract different plausible model scenarios with their associated uncertainties and data residual distributions. Although we demonstrate our workflow using 2D ERT data and a PSO-based inversion approach, the proposed strategy is general and can be adapted to analyze model ensembles generated from other kinds of geophysical data and using different global inversion approaches.
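The cluster-analysis step can be illustrated with a minimal sketch: a hand-rolled k-means groups a hypothetical two-scenario ensemble of 4-layer resistivity models, after which per-cluster statistics quantify each scenario's uncertainty. The ensemble here is synthetic, and the actual workflow clusters PSO ensembles together with their data residuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 60 equivalent 4-layer resistivity models (ohm-m)
# scattered around two plausible scenarios, mimicking a PSO model ensemble.
scenario_a = np.array([50.0, 200.0, 20.0, 500.0])
scenario_b = np.array([60.0, 150.0, 80.0, 450.0])
ensemble = np.vstack([
    scenario_a + rng.normal(0.0, 5.0, (30, 4)),
    scenario_b + rng.normal(0.0, 5.0, (30, 4)),
])

def kmeans(X, k=2, iters=50):
    """Minimal k-means to split the ensemble into model scenarios."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels, centers

labels, centers = kmeans(ensemble)
# Per-cluster spread as a simple layer-wise uncertainty estimate:
cluster_spread = [ensemble[labels == j].std(axis=0) for j in (0, 1)]
```

Each cluster centre approximates one plausible resistivity scenario, and the within-cluster standard deviations give layer-wise uncertainties, mirroring the ensemble-inference idea of the study.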
The quantification of spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis emerged as a powerful tool to study the intricate spatiotemporal relationship between many variables in a compact way. In this study, we employ two nonlinear concepts of event synchronization and edit distance to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and identify essential sites in the river basin with respect to potential prediction skills. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings of the study reveal that (1) the network degree decreases from southwest to northwest, (2) the timing of 50th percentile precipitation within a year influences the spatial distribution of degree, (3) the timing is inversely related to elevation, and (4) the lower elevation greatly influences connectivity of the sites. The study highlights that edit distance could be a promising alternative to analyze event-like data by incorporating event time and amplitude and constructing complex networks of climate extremes.
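A heavily simplified event-synchronization measure illustrates the idea of counting near-coincident extremes between two sites. The published method (Quiroga et al.) adapts the coincidence window locally to the event spacing; the fixed window tau below is an assumption made for brevity.

```python
def event_sync(ex, ey, tau=3):
    """Simplified event synchronization with a fixed coincidence window tau:
    counts event pairs in series x and y occurring within tau time steps,
    normalized by sqrt(n_x * n_y)."""
    c = sum(1 for tx in ex for ty in ey if abs(tx - ty) <= tau)
    return c / (len(ex) * len(ey)) ** 0.5

# Extreme-event times (e.g. day indices of heavy-rain days) at two sites:
q_high = event_sync([10, 30, 50, 70], [11, 29, 52, 71])  # near-coincident
q_low = event_sync([10, 30, 50, 70], [5, 20, 40, 60])    # unrelated timing
```

Computing this measure for every station pair and thresholding it yields the adjacency matrix of the climate network, whose node degrees are analyzed in the study.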
Accurate time series representation of paleoclimatic proxy records is challenging because such records involve dating errors in addition to proxy measurement errors. Rigorous attention is rarely given to age uncertainties in paleoclimatic research, although the latter can severely bias the results of proxy record analysis. Here, we introduce a Bayesian approach to represent layer-counted proxy records - such as ice cores, sediments, corals, or tree rings - as sequences of probability distributions on absolute, error-free time axes. The method accounts for both proxy measurement errors and uncertainties arising from layer-counting-based dating of the records. An application to oxygen isotope ratios from the North Greenland Ice Core Project (NGRIP) record reveals that the counting errors, although seemingly small, lead to substantial uncertainties in the final representation of the oxygen isotope ratios. In particular, for the older parts of the NGRIP record, our results show that the total uncertainty originating from dating errors has been seriously underestimated. Our method is next applied to deriving the overall uncertainties of the Suigetsu radiocarbon comparison curve, which was recently obtained from varved sediment cores at Lake Suigetsu, Japan. This curve provides the only terrestrial radiocarbon comparison for the time interval 12.5-52.8 kyr BP. The uncertainties derived here can be readily employed to obtain complete error estimates for arbitrary radiometrically dated proxy records of this recent part of the last glacial interval.
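How seemingly small counting errors accumulate downcore can be illustrated with a Monte Carlo sketch. The error probabilities below are arbitrary illustrations, not those inferred for NGRIP: each counted layer has a small chance of being a double count (contributing 0 years) or of hiding a missed year (contributing 2 years), and the cumulative age spread grows with depth.

```python
import numpy as np

rng = np.random.default_rng(0)

n_layers, n_sim = 5000, 500
p_double, p_miss = 0.01, 0.01  # assumed miscount probabilities per layer

# Per-layer duration in years: 0 (doubly counted), 1 (correct), 2 (missed).
durations = rng.choice(
    [0, 1, 2], size=(n_sim, n_layers),
    p=[p_double, 1.0 - p_double - p_miss, p_miss],
)
ages = durations.cumsum(axis=1)  # age of each layer in every realization

sd_shallow = ages[:, 99].std()   # age spread after 100 counted layers
sd_deep = ages[:, -1].std()      # age spread after 5000 counted layers
```

With symmetric miscount probabilities the mean age stays unbiased while the age standard deviation grows roughly with the square root of the number of layers, which is why the deeper (older) parts of a layer-counted record carry much larger absolute dating uncertainties.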
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake risk. Macroseismic data for the 1985 Valparaiso event show large variations in the distribution of damage to buildings within short distances, indicating strong effects of local sediments on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity and a study was carried out aiming to estimate site amplification derived from horizontal-to-vertical (H/V) spectral ratios from earthquake data (EHV) and ambient noise (NHV), as well as using the standard spectral ratio (SSR) technique with a nearby reference station located on igneous rock. The results lead to the following conclusions: The analysis of earthquake data shows significant dependence on the local geological structure with respect to amplitude and duration. An amplification of ground motion at frequencies higher than the fundamental one can be found. This amplification would not be found when looking at NHV ratios alone. The analysis of NHV spectral ratios shows that they can only provide a lower bound in amplitude for site amplification. P-wave site responses always show lower amplitudes than those derived by S waves, and sometimes even fail to provide some frequencies of amplification. No variability in terms of time and amplitude is observed in the analysis of the H/V ratio of noise. Due to the geological conditions in some parts of the investigated area, the fundamental resonance frequency of a site is difficult to estimate following standard criteria proposed by the SESAME consortium, suggesting that these are too restrictive under certain circumstances.
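The H/V spectral-ratio computation itself is compact. The sketch below builds synthetic component spectra with a horizontal site resonance near 2 Hz and recovers the peak frequency; combining the horizontals by their quadratic mean is one common convention, and all spectra here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical smoothed amplitude spectra of the three components at one
# station, with a resonance peak on the horizontals near 2 Hz.
f = np.linspace(0.5, 10.0, 200)
peak = 1.0 + 4.0 * np.exp(-((f - 2.0) ** 2) / 0.5)
north = peak * (1.0 + 0.05 * rng.random(200))
east = peak * (1.0 + 0.05 * rng.random(200))
vert = 1.0 + 0.05 * rng.random(200)

# H/V ratio with quadratic-mean combination of the horizontal components:
hv = np.sqrt((north ** 2 + east ** 2) / 2.0) / vert

f0 = f[np.argmax(hv)]  # estimate of the fundamental resonance frequency
```

In practice the spectra come from windowed, smoothed recordings of ambient noise or earthquakes, and criteria such as those of the SESAME consortium are applied to judge whether the picked peak is reliable.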
The closed-chamber method is the most common approach to determine CH4 fluxes in peatlands. The concentration change in the chamber is monitored over time, and the flux is usually calculated from the slope of a linear regression function. Theoretically, the gas exchange cannot be constant over time but has to decrease when the concentration gradient between chamber headspace and soil air decreases. In this study, we test whether we can detect this non-linearity in the concentration change during the chamber closure with six air samples. We expect generally a low concentration gradient on dry sites (hummocks) and thus the occurrence of exponential concentration changes in the chamber due to a quick equilibrium of gas concentrations between peat and chamber headspace. On wet (flarks) and sedge-covered sites (lawns), we expect a high gradient and near-linear concentration changes in the chamber. To evaluate these model assumptions, we calculate both linear and exponential regressions for a test data set (n = 597) from a Finnish mire. We use the Akaike Information Criterion with small-sample second-order bias correction (AICc) to select the best-fitted model. 13.6%, 19.2% and 9.8% of measurements on hummocks, lawns and flarks, respectively, were best fitted with an exponential regression model. A flux estimate derived from the slope of the exponential function at the beginning of the chamber closure can be significantly higher than one using the slope of the linear regression function. Non-linear concentration-over-time curves occurred mostly during periods of changing water table. This could be due to either natural processes or chamber artefacts, e.g. initial pressure fluctuations during chamber deployment. To be able to exclude either natural processes or artefacts as the cause of non-linearity, further information, e.g. CH4 concentration profile measurements in the peat, would be needed. If this is not available, the range of uncertainty can be substantial.
We suggest using the range between the slopes of the exponential regression at the beginning and at the end of the closure time as an estimate of the overall uncertainty.
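The linear-versus-exponential model selection can be sketched as follows. The chamber data below are synthetic, and the exponential model is fitted by a simple grid search over the rate constant (the study's own fitting procedure is not specified here); the AICc formula is the standard small-sample correction of the AIC.

```python
import numpy as np

t = np.linspace(0.0, 30.0, 6)  # six air samples over a 30-minute closure
# Synthetic CH4 concentrations (ppm) approaching equilibrium exponentially:
c = 2.0 + 3.0 * (1.0 - np.exp(-0.1 * t))

def aicc(rss, n, k):
    """Akaike Information Criterion with small-sample bias correction."""
    return n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# Linear model c = a + b*t (2 parameters)
b, a = np.polyfit(t, c, 1)
rss_lin = float(((c - (a + b * t)) ** 2).sum())

# Exponential model c = a + b*(1 - exp(-k*t)): grid search over k (3 params)
best_rss, best_k, best_ab = None, None, None
for k in np.linspace(0.01, 0.5, 50):
    x = 1.0 - np.exp(-k * t)
    bb, aa = np.polyfit(x, c, 1)
    rss = float(((c - (aa + bb * x)) ** 2).sum())
    if best_rss is None or rss < best_rss:
        best_rss, best_k, best_ab = rss, k, (aa, bb)

aicc_lin = aicc(rss_lin, len(t), 2)
aicc_exp = aicc(best_rss, len(t), 3)
flux_lin = b                     # flux from the linear slope
flux_exp0 = best_ab[1] * best_k  # flux from the exponential slope at t = 0
```

For such saturating data the AICc selects the exponential model, and the initial exponential slope exceeds the linear slope, reproducing the abstract's point that linear fits can underestimate the flux when the concentration change is non-linear.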
Flood loss data collection and modeling are not standardized, and previous work has indicated that losses from different flood types (e.g., riverine and groundwater) may follow different driving forces. However, different flood types may occur within a single flood event, which is known as a compound flood event. Therefore, we aimed to identify statistical similarities between loss-driving factors across flood types and test whether the corresponding losses should be modeled separately. In this study, we used empirical data from 4,418 respondents from four survey campaigns studying households in Germany that experienced flooding. These surveys sought to investigate several features of the impact process (hazard, socioeconomic, preparedness, and building characteristics, as well as flood type). While the level of most of these features differed across flood type subsamples (e.g., degree of preparedness), they did so in a nonregular pattern. A variable selection process indicates that besides hazard and building characteristics, information on property-level preparedness was also selected as a relevant predictor of the loss ratio. These variables represent information, which is rarely adopted in loss modeling. Models shall be refined with further data collection and other statistical methods. To save costs, data collection efforts should be steered toward the most relevant predictors to enhance data availability and increase the statistical power of results. Understanding that losses from different flood types are driven by different factors is a crucial step toward targeted data collection and model development and will finally clarify conditions that allow us to transfer loss models in space and time. 
Key Points: (1) Survey data of flood-affected households show different concurrent flood types, undermining the use of a single-flood-type loss model. (2) Thirteen variables addressing flood hazard, the building, and property-level preparedness are significant predictors of the building loss ratio. (3) Flood-type-specific models show varying significance across the predictor variables, indicating a hindrance to model transferability.
Risk-based insurance is a commonly proposed and discussed flood risk adaptation mechanism in policy debates across the world such as in the United Kingdom and the United States of America. However, both risk-based premiums and growing risk pose increasing difficulties for insurance to remain affordable. An empirical concept of affordability is required as the affordability of adaption strategies is an important concern for policymakers, yet such a concept is not often examined. Therefore, a robust metric with a commonly acceptable affordability threshold is required. A robust metric allows for a previously normative concept to be quantified in monetary terms, and in this way, the metric is rendered more suitable for integration into public policy debates. This paper investigates the degree to which risk-based flood insurance premiums are unaffordable in Europe. In addition, this paper compares the outcomes generated by three different definitions of unaffordability in order to investigate the most robust definition. In doing so, the residual income definition was found to be the least sensitive to changes in the threshold. While this paper focuses on Europe, the selected definition can be employed elsewhere in the world and across adaption measures in order to develop a common metric for indicating the potential unaffordability problem.
The importance of cultural ecosystem services in agricultural landscapes is increasingly recognized as agricultural scale enlargement and abandonment affect aesthetic and recreational values of agricultural landscapes. Landscape preference studies addressing these types of values often yield context-specific outcomes, limiting the applicability of their outcomes in landscape policy. Our approach measures the relative importance of landscape features across agricultural landscapes. This approach was applied in the agricultural landscapes of Winterswijk, The Netherlands (n=191) and the Märkische Schweiz, Germany (n=113) among visitors in the agricultural landscape. We set up a parallel designed choice experiment, using regionally specific, photorealistic visualizations of four comparable landscape attributes. In the Dutch landscape, visitors highly value hedgerows and tree lines, whereas groups of trees and crop diversity are highly valued in the German landscape. Furthermore, we find that differences in relative preference for landscape attributes are, to some extent, explained by socio-cultural background variables such as education level and affinity with agriculture of the visitors. This approach contributes to a better understanding of the cross-regional variation of aesthetic and recreational values and how these values relate to characteristics of the agricultural landscape, which could support the integration of cultural services in landscape policy.
In this paper, we analyse the effectiveness of flood management measures based on the concept known as "retaining water in the landscape". The investigated measures include afforestation, micro-ponds and small reservoirs. A comparative and model-based methodological approach has been developed and applied for three meso-scale catchments located in different European hydro-climatological regions: Poyo (184 km²) in the Spanish Mediterranean, Upper Iller (954 km²) in the German Alps and Kamp (621 km²) in Northeast Austria, representing the Continental hydro-climate. This comparative analysis has found general similarities in spite of the particular differences among the studied areas. In general terms, the flood reduction achieved through the concept of "retaining water in the landscape" depends on the following factors: the storage capacity increase in the catchment resulting from such measures, the characteristics of the rainfall event, the antecedent soil moisture condition and the spatial distribution of such flood management measures in the catchment. Overall, our study has shown that this concept is effective for small and medium events, but almost negligible for the largest and less frequent floods: this holds true for all different hydro-climatic regions, and with different land-use, soils and morphological settings.
Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are also an important economical resource, and through natural seepage or accidental release they can be major pollutants. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence and thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix. In principle, this technique can also be used to separate cells from oily sediments, but it was not originally optimized for this application. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from oily samples treated according to our new protocol are significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane (and, in samples containing more mature oils, methanol) delivered the best results. However, as solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which hydrocarbon extraction is maximized and cell lysis minimized. A volumetric ratio of 1:2-1:5 between a formalin-fixed sediment slurry and solvent delivered the highest cell counts. Extraction efficiency was around 30-50% and was checked on both oily samples spiked with known amounts of E. coli cells and oil-free samples amended with fresh and biodegraded oil. The method provided reproducible results on samples containing very different kinds of oils with regard to their degree of biodegradation.
For strongly biodegraded oil, MeOH turned out to be the most appropriate solvent, whereas for less biodegraded samples n-hexane delivered the best results.
A 3-D crustal shear wave velocity model and Moho map below the Semail Ophiolite, eastern Arabia
(2022)
The Semail Ophiolite in eastern Arabia is the largest and best-exposed slice of oceanic lithosphere on land. Detailed knowledge of the tectonic evolution of the shallow crust, in particular during and after ophiolite obduction in Late Cretaceous times, is contrasted by few constraints on physical and compositional properties of the middle and lower continental crust below the obducted units. The role of inherited, pre-obduction crustal architecture remains therefore unaccounted for in our understanding of crustal evolution and the present-day geology. Based on seismological data acquired during a 27-month campaign in northern Oman, Ambient Seismic Noise Tomography and Receiver Function analysis provide for the first time a 3-D radially anisotropic shear wave velocity (V_S) model and a consistent Moho map below the iconic Semail Ophiolite. The model highlights deep crustal boundaries that segment the eastern Arabian basement in two distinct units. The previously undescribed Western Jabal Akhdar Zone separates Arabian crust with typical continental properties and a thickness of ~40-45 km in the northwest from a compositionally different terrane in the southeast that is interpreted as a terrane accreted during the Pan-African orogeny in Neoproterozoic times. East of the Ibra Zone, another deep crustal boundary, crustal thickness decreases to 30-35 km and very high lower crustal V_S suggest large-scale mafic intrusions into, and possible underplating of, the Arabian continental crust that occurred most likely during Permian breakup of Pangea. Mafic reworking is sharply bounded by the (upper crustal) Semail Gap Fault Zone, northwest of which no such high velocities are found in the crust. Topography of the Oman Mountains is supported by a mild crustal root, and Moho depth below the highest topography, the Jabal Akhdar Dome, is ~42 km.
Radial anisotropy is robustly resolved in the upper crust and aids in discriminating dipping allochthonous units from autochthonous sedimentary rocks that are indistinguishable by isotropic V_S alone. Lateral thickness variations of the ophiolite highlight the Haylayn Ophiolite Massif on the northern flank of the Jabal Akhdar Dome and the Hawasina Window as the deepest-reaching unit. Ophiolite thickness is ~10 km in the southern and northern massifs, and ≤5 km elsewhere.
Timing and magnitude of surface uplift are key to understanding the impact of crustal deformation and topographic growth on atmospheric circulation, environmental conditions, and surface processes. Uplift of the East African Plateau is linked to mantle processes, but paleoaltimetry data are too scarce to constrain plateau evolution and subsequent vertical motions associated with rifting. Here, we assess the paleotopographic implications of a beaked whale fossil (Ziphiidae) from the Turkana region of Kenya found 740 km inland from the present-day coastline of the Indian Ocean at an elevation of 620 m. The specimen is ~17 My old and represents the oldest derived beaked whale known, consistent with molecular estimates of the emergence of modern strap-toothed whales (Mesoplodon). The whale traveled from the Indian Ocean inland along an eastward-directed drainage system controlled by the Cretaceous Anza Graben and was stranded slightly above sea level. Surface uplift from near sea level coincides with paleoclimatic change from a humid environment to highly variable and much drier conditions, which altered biotic communities and drove evolution in east Africa, including that of primates.