Nested application conditions generalise the well-known negative application conditions and are important for several application domains. In this paper, we present Local Church-Rosser, Parallelism, Concurrency and Amalgamation Theorems for rules with nested application conditions in the framework of M-adhesive categories, where M-adhesive categories are slightly more general than weak adhesive high-level replacement categories. Most of the proofs are based on the corresponding statements for rules without application conditions and two shift lemmas stating that nested application conditions can be shifted over morphisms and rules.
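For readers unfamiliar with the formalism, a minimal illustration (in the standard notation after Habel and Pennemann, not specific to this paper) of how negative application conditions arise as the simplest nested conditions:

```latex
% Nested application conditions over an object P are built from
%   true                          (always satisfied)
%   \exists (a : P -> C, c)       (morphism a with a condition c over C)
% and are closed under negation and conjunction. A classical negative
% application condition (NAC) is the depth-one negative case; deeper
% nesting expresses constraints NACs cannot.
\[
  \mathrm{NAC}(n) \;=\; \neg\,\exists\,(n\colon L \to N,\ \mathrm{true}),
  \qquad
  \exists\,\bigl(a\colon L \to C,\ \neg\,\exists\,(b\colon C \to D,\ \mathrm{true})\bigr).
\]
```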
With accelerating climate cooling in the late Cenozoic, glacial and periglacial erosion became more widespread on the surface of the Earth. The resultant shift in erosion patterns significantly changed the large-scale morphology of many mountain ranges worldwide. Whereas the glacial fingerprint is easily distinguished by its characteristic fjords and U-shaped valleys, the periglacial fingerprint is more subtle but potentially prevails in some mid- to high-latitude landscapes. Previous models have advocated a frost-driven control on debris production at steep headwalls and glacial valley sides. Here we investigate the important role that periglacial processes also play in less steep parts of mountain landscapes. Understanding the influences of frost-driven processes in low-relief areas requires a focus on the consequences of an accreting soil mantle, which characterises such surfaces. We present a new model that quantifies two key physical processes, frost cracking and frost creep, as functions of both temperature and sediment thickness. Our results yield new insights into how climate and sediment transport properties combine to scale the intensity of periglacial processes. The thickness of the soil mantle strongly modulates the relation between climate and the intensity of mechanical weathering and sediment flux. Our results also point to an offset between the conditions that promote frost cracking and those that promote frost creep, indicating that a stable climate can provide optimal conditions for only one of those processes at a time. Finally, quantifying these relations also opens up the possibility of including periglacial processes in large-scale, long-term landscape evolution models, as demonstrated in a companion paper.
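A minimal sketch of how such a soil-thickness-dependent frost-cracking proxy can be set up, assuming a damped annual temperature cycle with depth and an illustrative frost-cracking window; the functional forms, constants, and the soil damping factor are placeholders, not the authors' published equations:

```python
import numpy as np

# Illustrative frost-cracking proxy (assumed forms, not the paper's model).
# Temperature follows a damped, phase-lagged annual cycle with depth;
# cracking intensity accumulates where T falls inside an assumed
# frost-cracking window (-8 to -3 degC).

KAPPA = 1.0e-6                    # thermal diffusivity (m^2/s), assumed
YEAR = 3.15e7                     # seconds per year
OMEGA = 2.0 * np.pi / YEAR

def temperature(z, t, mat=1.0, amp=10.0):
    """Damped annual temperature cycle at depth z (m); mat = mean annual T."""
    d = np.sqrt(2.0 * KAPPA / OMEGA)          # damping depth (m)
    return mat + amp * np.exp(-z / d) * np.sin(OMEGA * t - z / d)

def frost_cracking_index(mat, soil_depth, amp=10.0, zmax=20.0, nz=400, nt=365):
    """Time- and depth-integrated proxy for frost-cracking intensity.

    Contributions inside the soil mantle are damped by an assumed factor,
    standing in for the model's water-availability and sediment terms.
    """
    z = np.linspace(0.0, zmax, nz)
    fci = 0.0
    for ti in np.linspace(0.0, YEAR, nt):
        T = temperature(z, ti, mat, amp)
        in_window = (T > -8.0) & (T < -3.0)
        weight = np.where(z < soil_depth, 0.3, 1.0)   # assumed soil damping
        fci += np.trapz(in_window * weight, z)
    return fci / nt

# Thicker soil mantles mute the climate signal reaching bedrock:
for h in (0.0, 0.5, 2.0):
    print(f"soil={h:.1f} m  FCI={frost_cracking_index(mat=-2.0, soil_depth=h):.2f}")
```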
Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from district to country level. We discuss the broad and heavily skewed distribution of insured losses, which poses difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic wind–damage relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two sources of wind gust data, direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of gust data has little impact on the evaluation of German storm loss, spatially resolved coefficients of variation reveal a dependence between model and data choice. The comparison shows that the probabilistic models by Heneka et al. (2006) and Prahl et al. (2012) both provide accurate loss predictions for moderate to extreme losses, with generally small coefficients of variation. We favour the latter model in terms of model applicability. Application of the versatile deterministic model by Klawa and Ulbrich (2003) should be restricted to extreme loss, for which it shows the least bias and errors comparable to the probabilistic model by Prahl et al. (2012).
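The cubic wind-damage relationship referred to here is, in the spirit of Klawa and Ulbrich (2003), a weighted sum of cubed gust exceedances over a local high percentile; a sketch with placeholder calibration and weighting:

```python
import numpy as np

# Cubic excess-over-threshold storm loss index in the spirit of Klawa and
# Ulbrich (2003): gusts are scaled by the local 98th-percentile gust and
# only the exceedance contributes, raised to the third power. The monetary
# calibration and the population weighting are placeholders.

def loss_index(gusts, v98, weights=None):
    """gusts, v98: per-district daily max gust and local 98th percentile.

    Returns the raw (uncalibrated) loss index; a regression against
    recorded insured losses would supply the monetary scaling.
    """
    gusts = np.asarray(gusts, dtype=float)
    excess = np.maximum(gusts / np.asarray(v98) - 1.0, 0.0)  # no damage below v98
    cubed = excess ** 3
    if weights is not None:                  # e.g. district population
        cubed = cubed * np.asarray(weights)
    return cubed.sum()

# Example: three districts, one storm day
print(loss_index(gusts=[32.0, 41.0, 25.0],
                 v98=[28.0, 30.0, 27.0],
                 weights=[1.2e5, 8.0e4, 2.3e5]))
```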
Manganese (Mn) is an essential micronutrient for development and function of the nervous system. Deficiencies in Mn transport have been implicated in the pathogenesis of Huntington's disease (HD), an autosomal dominant neurodegenerative disorder characterized by loss of medium spiny neurons of the striatum. Brain Mn levels are highest in the striatum and other basal ganglia structures, the brain regions most sensitive to Mn neurotoxicity. Mouse models of HD exhibit decreased striatal Mn accumulation, and HD striatal neuron models are resistant to Mn cytotoxicity. We hypothesized that the observed modulation of Mn cellular transport is associated with compensatory metabolic responses to HD pathology. Here we use an untargeted metabolomics approach, performing ultraperformance liquid chromatography-ion mobility-mass spectrometry (UPLC-IM-MS) on control and HD immortalized mouse striatal neurons to identify metabolic disruptions under three Mn exposure conditions: low (vehicle), moderate (non-cytotoxic) and high (cytotoxic). Our analysis revealed lower levels of pantothenic acid and glutathione (GSH) in HD striatal cells relative to control cells. HD striatal cells also exhibited lower abundance and impaired induction of isobutyryl carnitine in response to increasing Mn exposure. In addition, we observed induction of metabolites in the pentose shunt pathway in HD striatal cells after high Mn exposure. These findings provide metabolic evidence of an interaction between the HD genotype and biologically relevant levels of Mn in a striatal cell model with known HD genotype-by-Mn exposure interactions. The metabolic phenotypes detected support existing hypotheses that changes in energetic processes underlie the pathobiology of both HD and Mn neurotoxicity.
Inventories of individually delineated landslides are key to understanding landslide physics and mitigating their impact. They permit assessment of area–frequency distributions and landslide volumes, and testing of statistical correlations between landslides and physical parameters such as topographic gradient or seismic strong motion. Amalgamation, i.e. the mapping of several adjacent landslides as a single polygon, can lead to potentially severe distortion of the statistics of these inventories. This problem can be especially severe in data sets produced by automated mapping. We present five inventories of earthquake-induced landslides mapped with different materials and techniques and affected by varying degrees of amalgamation. Errors on the total landslide volume and on the power-law exponent of the area–frequency distribution resulting from amalgamation may be up to 200% and 50%, respectively. We present an algorithm, based on image and digital elevation model (DEM) analysis, for automatic identification of amalgamated polygons. On a set of about 2000 polygons larger than 1,000 m², tracing landslides triggered by the 1994 Northridge earthquake, the algorithm performs well: only 2.7–3.6% of amalgamated polygons are missed, and only 3.9–4.8% of correct polygons are incorrectly identified as amalgams. This algorithm can be used broadly to check landslide inventories and allows faster correction by automating the identification of amalgamation.
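One plausible DEM-based ingredient of such a check, sketched under the assumption that a single landslide drains in a broadly consistent direction (the published algorithm combines several image and DEM criteria; the threshold here is a placeholder to be tuned):

```python
import numpy as np

# Illustrative fragment of a DEM-based amalgamation test (an assumed
# simplification): a polygon whose interior spans strongly opposed
# downslope aspects likely crosses a drainage divide and is flagged as
# a candidate amalgam.

def aspect(dem, cell=10.0):
    """Downslope aspect (radians) from a DEM grid (rows = y, cols = x)."""
    dzdy, dzdx = np.gradient(dem, cell)
    return np.arctan2(-dzdy, -dzdx)

def is_candidate_amalgam(dem_patch, mask, spread_deg=120.0):
    """Flag a polygon (boolean mask over dem_patch) with too-varied aspects.

    Directional spread is measured via the resultant length of the unit
    aspect vectors, converted to a circular standard deviation.
    """
    a = aspect(dem_patch)[mask]
    r = np.hypot(np.cos(a).mean(), np.sin(a).mean())  # 1 = uniform direction
    circ_std = np.sqrt(-2.0 * np.log(max(r, 1e-12)))  # radians
    return np.degrees(circ_std) > spread_deg

# Tiny synthetic check: a ridge-crossing polygon drains two opposite ways
y, x = np.mgrid[0:50, 0:50]
ridge_dem = -np.abs(y - 25).astype(float)             # slopes away from row 25
print(is_candidate_amalgam(ridge_dem, np.ones_like(ridge_dem, bool)))
```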
Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, gFlex can be used by itself or within GRASS GIS for better integration with field data. gFlex is also a component of the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and it can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex take from much less than 1 s to about 1 min on a personal laptop computer. These characteristics (multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute times) make gFlex an effective tool for flexural isostatic modeling across the geosciences.
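A sketch of a stand-alone 2-D finite-difference run, following the interface shown in the gFlex documentation (attribute names and option strings should be checked against the version in use):

```python
import numpy as np
import gflex

# Minimal 2-D (map-view) finite-difference flexure run with gFlex.
flex = gflex.F2D()
flex.Quiet = True
flex.Method = 'FD'            # finite difference; analytical 'SAS' also exists
flex.Solver = 'direct'

flex.g = 9.8                  # gravity (m s^-2)
flex.E = 65e9                 # Young's modulus (Pa)
flex.nu = 0.25                # Poisson's ratio
flex.rho_m = 3300.            # mantle density (kg m^-3)
flex.rho_fill = 0.            # density of material filling the deflection

ny = nx = 50
flex.dx = flex.dy = 5000.     # 5 km grid
flex.Te = 35000. * np.ones((ny, nx))   # uniform 35 km elastic thickness here;
                                       # lateral heterogeneity = vary this array
flex.qs = np.zeros((ny, nx))           # surface load (Pa)
flex.qs[20:30, 20:30] = 1e6            # a central load block

# Boundary conditions: one of the five families listed in the abstract
flex.BC_W = flex.BC_E = flex.BC_N = flex.BC_S = '0Displacement0Slope'

flex.initialize()
flex.run()
flex.finalize()
print(flex.w.shape, flex.w.min())      # w holds the deflection field (m)
```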
Robust appraisals of climate impacts at different levels of global-mean temperature increase are vital to guide assessments of dangerous anthropogenic interference with the climate system. The 2015 Paris Agreement includes a two-headed temperature goal: "holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C". Despite the prominence of these two temperature limits, a comprehensive overview of the differences in climate impacts at these levels is still missing. Here we provide an assessment of key impacts of climate change at warming levels of 1.5 °C and 2 °C, including extreme weather events, water availability, agricultural yields, sea-level rise and risk of coral reef loss. Our results reveal substantial differences in impacts between 1.5 °C and 2 °C of warming that are highly relevant for the assessment of dangerous anthropogenic interference with the climate system. For heat-related extremes, the additional 0.5 °C increase in global-mean temperature marks the difference between events at the upper limit of present-day natural variability and a new climate regime, particularly in tropical regions. Similarly, this warming difference is likely to be decisive for the future of tropical coral reefs. In a scenario with an end-of-century warming of 2 °C, virtually all tropical coral reefs are projected to be at risk of severe degradation due to temperature-induced bleaching from 2050 onwards. This fraction is reduced to about 90% in 2050 and projected to decline to 70% by 2100 for a 1.5 °C scenario. Analyses of precipitation-related impacts reveal distinct regional differences, and hot-spots of change emerge. The regional reduction in median water availability for the Mediterranean is found to nearly double, from 9% to 17%, between 1.5 °C and 2 °C, and the projected lengthening of regional dry spells increases from 7% to 11%. Projections for agricultural yields differ between crop types as well as world regions. While some (in particular high-latitude) regions may benefit, tropical regions like West Africa, South-East Asia, as well as Central and northern South America are projected to face substantial local yield reductions, particularly for wheat and maize. Best-estimate sea-level rise projections based on two illustrative scenarios indicate a 50 cm rise by 2100 relative to year-2000 levels for a 2 °C scenario, and about 10 cm lower levels for a 1.5 °C scenario. In a 1.5 °C scenario, the rate of sea-level rise in 2100 would be reduced by about 30% compared to a 2 °C scenario. Our findings highlight the importance of regional differentiation in assessing both future climate risks and different vulnerabilities to incremental increases in global-mean temperature. The article provides a consistent and comprehensive assessment of existing projections and a good basis for future work on refining our understanding of the difference between impacts at 1.5 °C and 2 °C of warming.
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide, and the loss is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-reinforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate so far relies on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here to gain a conceptual understanding by deriving a purposefully simple equation for the self-reinforcing feedback, which is then used to estimate the melt time for different levels of warming using three observable characteristics of the ice sheet itself and its surroundings. The analysis is purely conceptual in nature. It omits important processes, such as ice dynamics, that would be needed for applications to sea-level rise on centennial timescales, but if the volume loss is dominated by the feedback, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 °C above the threshold and 500 years for 5 °C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
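A hedged reconstruction of the kind of feedback equation the abstract describes; the linearization and the symbols are assumptions rather than the paper's exact derivation:

```latex
% Linearized melt-elevation feedback (illustrative). x = ice volume lost,
% \Delta T = warming above the threshold, \beta = melt sensitivity,
% \gamma = feedback strength per unit volume lost.
\[
  \frac{\mathrm{d}x}{\mathrm{d}t} = \beta\,(\Delta T + \gamma x)
  \;\Longrightarrow\;
  x(t) = \frac{\Delta T}{\gamma}\left(e^{\beta\gamma t} - 1\right),
  \qquad
  t(x) = \frac{1}{\beta\gamma}\,\ln\!\left(1 + \frac{\gamma x}{\Delta T}\right).
\]
% The melt time t(x) grows only logarithmically with the volume lost but
% diverges as \Delta T -> 0, reproducing the critical slow-down near the
% threshold described in the abstract.
```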
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events into a quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies between damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability for granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitudes and at both the microscale and the macroscale level. The main findings are the dominance of uncertainty from the hazard magnitude and the persistent behaviour of intrinsic uncertainties on both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
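One way to read the "granular portfolio" idea, sketched as an assumption-laden illustration rather than the authors' formulation: macroscale damage emerges from aggregating per-unit damage thresholds across a portfolio.

```python
import numpy as np

# Hedged sketch: each unit in a portfolio fails once the hazard magnitude
# exceeds its own threshold, so the macroscale damage function is the
# exposure-weighted fraction of exceeded thresholds. Distributions and
# parameters below are placeholders.

rng = np.random.default_rng(0)

def macro_damage(magnitude, thresholds, values):
    """Total damage when hazard `magnitude` exceeds per-unit thresholds."""
    hit = magnitude >= thresholds
    return values[hit].sum()

thresholds = rng.lognormal(mean=1.0, sigma=0.4, size=10_000)  # assumed spread
values = rng.uniform(0.5, 2.0, size=10_000)                   # unit exposures

for m in (1.0, 2.0, 3.0, 4.0):
    print(f"magnitude {m:.1f}: damage {macro_damage(m, thresholds, values):,.0f}")
```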
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 °C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that, due to wave propagation, ice is discharged back into the ocean much faster than would be expected from pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. A millennium-scale storage of at least 80% of the additional ice requires placing it at a distance of at least 700 km from the coastline. The pumping energy required to raise the potential energy of ocean water, mitigating the currently observed 3 mm yr⁻¹ of sea-level rise, would exceed 7% of the current global primary energy supply. At the same time, the approach offers comprehensive protection for entire coastlines, particularly including regions that cannot be protected by dikes.
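An order-of-magnitude check of the pumping-energy figure; the ocean area, lift height, and pump efficiency below are round-number assumptions, not the paper's values:

```python
# Back-of-envelope check of the pumping-energy claim (all inputs assumed).
OCEAN_AREA = 3.6e14        # m^2, global ocean surface
RATE = 3.0e-3              # m/yr of sea-level rise to offset
RHO_SEAWATER = 1028.0      # kg/m^3
G = 9.81                   # m/s^2
LIFT = 3000.0              # m, assumed surface elevation ~700 km inland
EFFICIENCY = 0.7           # assumed pump efficiency
PRIMARY_ENERGY = 5.8e20    # J/yr, roughly the current global supply

volume = OCEAN_AREA * RATE                               # m^3 of water per year
energy = RHO_SEAWATER * volume * G * LIFT / EFFICIENCY   # J/yr
print(f"{energy:.2e} J/yr = {energy / PRIMARY_ENERGY:.1%} of primary energy")
# -> a few times 10^19 J/yr, i.e. on the order of the quoted 7%
```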