Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, it can be used on its own or within GRASS GIS for better integration with field data. gFlex is also a component of the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and it can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex require ≪ 1 s to ~1 min on a personal laptop computer. These characteristics - multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute time - make gFlex an effective tool for flexural isostatic modeling across the geosciences.
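For readers who want to see the kind of computation involved, below is a minimal, self-contained sketch of a 1D flexural solve by finite differences. It is a toy illustration of the governing thin-plate equation, not gFlex's own code, and all parameter values are assumptions chosen for the example.

```python
# Illustrative 1D flexure solver (a toy, not gFlex itself): solves
#   D * d^4w/dx^4 + (rho_m - rho_fill) * g * w = q(x)
# by finite differences with clamped (0-displacement, 0-slope) ends.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def flex1d(q, dx, Te, E=65e9, nu=0.25, rho_m=3300.0, rho_fill=0.0, g=9.81):
    D = E * Te**3 / (12.0 * (1.0 - nu**2))        # flexural rigidity
    k = (rho_m - rho_fill) * g                    # buoyant restoring term
    n = q.size
    A = lil_matrix((n, n))
    c = D / dx**4
    for i in range(2, n - 2):                     # interior 5-point stencil
        A[i, i - 2:i + 3] = c * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
        A[i, i] += k
    A[0, 0] = A[n - 1, n - 1] = 1.0               # w = 0 at both ends
    A[1, 0], A[1, 1] = -1.0, 1.0                  # dw/dx = 0 at left end
    A[n - 2, n - 2], A[n - 2, n - 1] = 1.0, -1.0  # dw/dx = 0 at right end
    b = q.copy()
    b[[0, 1, n - 2, n - 1]] = 0.0
    return spsolve(A.tocsr(), b)

# Example: a 50-km-wide, 1-km-thick rock load on a 20-km-thick elastic plate.
x = np.arange(0.0, 500e3, 1e3)
q = np.where(np.abs(x - 250e3) < 25e3, 2700.0 * 9.81 * 1000.0, 0.0)
w = flex1d(q, dx=1e3, Te=20e3)
```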
Now that the United Kingdom has left the European Union, it remains unclear whether the two parties can successfully negotiate and sign a trade agreement within the transition period. Ongoing negotiations, practical obstacles and resulting uncertainties make it highly unlikely that economic actors would be fully prepared for a “no-trade-deal” situation. Here we provide an economic shock simulation of the immediate aftermath of such a post-Brexit no-trade-deal scenario by computing the time evolution of more than 1.8 million interactions between more than 6,600 economic actors in the global trade network. We find an abrupt decline in the number of goods produced in the UK and the EU. This sudden output reduction is caused by drops in demand as customers on the respective other side of the Channel incorporate the new trade restrictions into their decision-making. In response, producers reduce prices in order to stimulate demand elsewhere. In the short term consumers benefit from lower prices, but production value decreases, with potentially severe socio-economic consequences in the longer term.
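To illustrate the mechanism described above (demand drops propagate, producers cut prices to stimulate demand elsewhere), here is a deliberately tiny toy model. It is not the authors' simulation; the network size, shock magnitude, and elasticities are all invented for illustration.

```python
# Toy sketch of demand-shock propagation on a trade network (invented numbers).
import numpy as np

rng = np.random.default_rng(0)
n = 50                                       # toy economy with 50 producers
flows = rng.random((n, n))                   # flows[i, j]: goods shipped i -> j
np.fill_diagonal(flows, 0.0)
uk, eu = np.arange(0, 5), np.arange(5, 25)   # pretend trading blocs

baseline = flows.sum(axis=1)                 # baseline output sold per producer
prices = np.ones(n)

# Shock: new trade restrictions cut cross-Channel demand by 30%.
flows[np.ix_(uk, eu)] *= 0.7
flows[np.ix_(eu, uk)] *= 0.7

elasticity = 1.5                             # assumed price elasticity of demand
for _ in range(20):
    sold = flows.sum(axis=1)
    excess = np.maximum(baseline - sold, 0.0) / baseline  # unsold output share
    prices *= 1.0 - 0.3 * excess                          # price cuts
    flows *= (1.0 + elasticity * 0.3 * excess)[:, None]   # demand response

bloc = np.concatenate([uk, eu])
value = prices * flows.sum(axis=1)           # production value after adjustment
print("UK+EU production value vs baseline:", value[bloc].sum() / baseline[bloc].sum())
```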
In older persons, the origin of malnutrition is often multifactorial. At present, a common understanding of potential causes and their modes of action is lacking, and no consensus exists on a theoretical framework for the etiology of malnutrition. Within the European Knowledge Hub "Malnutrition in the Elderly (MaNuEL)," a model of "Determinants of Malnutrition in Aged Persons" (DoMAP) was developed in a multistage consensus process (a modified Delphi process with live meetings and written feedback) by a multiprofessional group of 33 experts in geriatric nutrition. DoMAP consists of three triangle-shaped levels with malnutrition at the center. The innermost level comprises the three principal conditions through which malnutrition develops: low intake, high requirements, and impaired nutrient bioavailability. The middle level consists of factors directly causing one of these conditions, and the outermost level contains factors that cause one of the three conditions indirectly, through the direct factors. The DoMAP model may contribute to a common understanding of the multitude of factors involved in the etiology of malnutrition and of potential causative mechanisms. It may serve as a basis for future research and may also be helpful in clinical routine for identifying persons at increased risk of malnutrition.
Neodymium isotopic composition (εNd) has enjoyed widespread use as a palaeotracer, principally because it behaves quasi-conservatively in the modern ocean. However, recent bottom-water εNd reconstructions from the eastern North Atlantic are difficult to interpret under assumptions of conservative behaviour. The observation that this apparent departure from conservative behaviour increases with enhanced ice-rafted debris (IRD) fluxes has led to the suggestion that IRD overprints bottom-water εNd through reversible scavenging. In this study, a simple water column model successfully reproduces εNd reconstructions from the eastern North Atlantic at the Last Glacial Maximum and Heinrich Stadial 1, and demonstrates that the changes in scavenging intensity required for a good model-data fit are in good agreement with changes in the observed IRD flux. Although uncertainties in model parameters preclude a more definitive conclusion, the results indicate that the suggestion of IRD as a source of non-conservative behaviour in the εNd tracer is reasonable, and that further research into the fundamental chemistry underlying the marine neodymium cycle is necessary to increase confidence in assumptions of conservative εNd behaviour in the past.
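The proposed overprinting mechanism can be illustrated with a toy two-end-member mixing calculation. This is not the study's water column model; the end-member values and the saturating form below are invented for illustration only.

```python
# Toy sketch: stronger reversible scavenging cycles more bottom-water Nd
# through particles, dragging eNd toward the particle (IRD) end-member.
def bottom_water_eNd(scavenging_intensity, eps_advected=-13.0, eps_ird=-22.0):
    # Fraction of Nd exchanged with particles: simple saturating form (assumed).
    f = scavenging_intensity / (1.0 + scavenging_intensity)
    return (1.0 - f) * eps_advected + f * eps_ird

for k in (0.1, 0.5, 2.0):   # low -> high IRD flux (arbitrary units)
    print(f"scavenging intensity {k}: eNd = {bottom_water_eNd(k):.1f}")
```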
While previous research underscores the role of leaders in stimulating employee voice behaviour, comparatively little is known about what affects leaders' support for such constructive but potentially threatening employee behaviours. We introduce leader-member exchange (LMX) quality as a central predictor of leaders' support for employees' ideas for constructive change. Apart from a general benefit of high LMX for leaders' idea support, we propose that high LMX is particularly critical to leaders' idea support when the idea voiced by an employee poses a power threat to the leader. We investigate leaders' attributions of prosocial and egoistic employee intentions as mediators of these effects. Hypotheses were tested in a quasi-experimental vignette study (N = 160), in which leaders evaluated a simulated employee idea, and a field study (N = 133), in which leaders evaluated an idea that had been voiced to them at work. Results show an indirect effect of LMX on leaders' idea support via attributed prosocial intentions but not via attributed egoistic intentions, and a buffering effect of high LMX on the negative effect of power threat on leaders' idea support. Results differed across the studies with regard to the main effect of LMX on idea support.
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands a corresponding model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge in an ever-changing business environment: once a process changes, all of its models have to be updated. Due to the large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction has emerged as an operation that reduces the number of stored process models and facilitates model management. Business process model abstraction preserves essential process properties and leaves out insignificant details in order to retain the information relevant for a particular purpose. Process model abstraction has been addressed by several researchers, whose studies have focused on particular use cases and on model transformations supporting those use cases. This thesis systematically approaches the problem of business process model abstraction and shapes the outcome into a framework. We investigate the current industry demand for abstraction, summarizing it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case in which the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on the analysis of process model structure. Further, abstraction methods that consider the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models, a barely researched challenge (see the sketch below). The thesis validates the designed abstraction methods against sets of industrial process models and discusses implementation aspects of the methods. Second, we develop a novel model transformation which, combined with the discovery of related activities, allows flexible non-hierarchical abstraction. In this way, the thesis contributes novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
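As an illustration of the related-activity discovery step, the sketch below groups activities by a toy label similarity and collapses each group into a coarse-grained activity. It is an assumed, minimal stand-in, not the thesis's algorithm; the threshold and similarity measure are invented.

```python
# Minimal sketch: group semantically related activities by label similarity,
# then collapse each group into one coarse-grained activity.
from itertools import combinations

activities = ["check invoice", "check order", "approve invoice",
              "ship goods", "pack goods"]

def similar(a, b):
    # Toy label similarity: Jaccard overlap of label tokens.
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

# Union-find style grouping over pairs above a similarity threshold.
parent = {a: a for a in activities}
def find(a):
    while parent[a] != a:
        a = parent[a]
    return a

for a, b in combinations(activities, 2):
    if similar(a, b) >= 0.3:
        parent[find(a)] = find(b)

clusters = {}
for a in activities:
    clusters.setdefault(find(a), []).append(a)
for root, members in clusters.items():
    print(f"coarse activity {root!r} <- {members}")
```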
In this study, we present an empirical model of the equatorial electron pitch angle distributions (PADs) in the outer radiation belt, based on the full data set collected by the Magnetic Electron Ion Spectrometer (MagEIS) instrument onboard the Van Allen Probes in 2012-2019. The PADs are fitted with a combination of the first, third, and fifth sine harmonics. The resulting equation resolves all PAD types found in the outer radiation belt (pancake, flat-top, butterfly, and cap PADs) and can be integrated analytically to derive the omnidirectional flux. We introduce a two-step modeling procedure that, for the first time, ensures a continuous dependence on L, magnetic local time, and activity, parametrized by the solar wind dynamic pressure. We propose two methods to reconstruct the equatorial electron flux using the model. The first approach requires two unidirectional flux observations and is applicable to low-pitch-angle data. The second method can be used to reconstruct the full equatorial PADs from a single unidirectional or omnidirectional measurement at off-equatorial latitudes. The model can be used for converting long-term data sets of electron fluxes to phase space density in terms of adiabatic invariants, for physics-based modeling in the form of boundary conditions, and for data assimilation purposes.
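The harmonic fit and its analytic integration can be sketched compactly. Assuming the form j(α) = A1 sin α + A3 sin 3α + A5 sin 5α from the abstract, orthogonality of sin(nα) on [0, π] gives the omnidirectional flux J = 2π ∫₀^π j(α) sin α dα = π² A1. The synthetic data and fit below are illustrative only.

```python
# Sketch: least-squares fit of odd sine harmonics to a pitch angle
# distribution, plus the analytic omnidirectional flux (J_omni = pi^2 * A1).
import numpy as np

alpha = np.deg2rad(np.arange(15, 166, 15))               # sampled pitch angles
j_obs = 1e4 * (np.sin(alpha) - 0.3 * np.sin(3 * alpha))  # synthetic "data"
j_obs *= 1.0 + 0.05 * np.random.default_rng(1).standard_normal(alpha.size)

G = np.column_stack([np.sin(alpha), np.sin(3 * alpha), np.sin(5 * alpha)])
(A1, A3, A5), *_ = np.linalg.lstsq(G, j_obs, rcond=None)
print(f"A1={A1:.3g}, A3={A3:.3g}, A5={A5:.3g}")
print(f"omnidirectional flux = pi^2 * A1 = {np.pi**2 * A1:.3g}")
```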
The Cluster mission has produced a large data set of electron flux measurements in the Earth's magnetosphere since its launch in late 2000. Electron fluxes are measured with the Research with Adaptive Particle Imaging Detectors (RAPID)/Imaging Electron Spectrometer (IES) detector as a function of energy, pitch angle, spacecraft position, and time. However, no adiabatic invariants have been calculated for Cluster so far. In this paper we present a step-by-step guide to the calculation of adiabatic invariants and the conversion of the electron flux to phase space density (PSD) in these coordinates. The electron flux is measured in two RAPID/IES energy channels, providing pitch angle distributions at energies of 39.2-50.5 and 68.1-94.5 keV in nominal mode since 2004. A fitting method allows the conversion of the differential fluxes to be extended to the range from 40 to 150 keV. The best data coverage for phase space density in adiabatic invariant coordinates is obtained for values of the second adiabatic invariant, K, of ~10^2, and values of the first adiabatic invariant, μ, in the range of ≈5-20 MeV/G. Furthermore, we describe the production of a new data product, "LSTAR," equivalent to the third adiabatic invariant, available through the Cluster Science Archive for the years 2001-2018 with 1-min resolution. The produced data set adds to the available observations of Earth's radiation belt region and can be used for long-term statistical purposes.
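A central step in such a pipeline is the textbook conversion from differential flux to phase space density, f = j/p², with relativistic momentum from (pc)² = E(E + 2E₀). The sketch below shows this relation only; it is not the paper's full procedure, and constant unit factors are omitted.

```python
# Sketch of the standard flux-to-PSD conversion (textbook relation only).
E0 = 0.511                                   # electron rest energy (MeV)

def flux_to_psd(j, E_MeV):
    """j: differential flux in 1/(cm^2 s sr MeV); returns PSD up to
    constant unit factors, via f = j / (pc)^2."""
    p2c2 = E_MeV * (E_MeV + 2.0 * E0)        # (pc)^2 in MeV^2
    return j / p2c2

# Example with the midpoints of the two RAPID/IES channels from the abstract:
for E in (0.0448, 0.0813):                   # ~44.8 and ~81.3 keV, in MeV
    print(E, flux_to_psd(1e5, E))
```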
Trade plays a key role in the spread of alien species and has arguably contributed to the recent enormous acceleration of biological invasions, thus homogenizing biotas worldwide. Combining data on 60-year trends in bilateral trade with data on biodiversity and climate, we modeled the global spread of plant species among 147 countries. The model results were compared with a recently compiled, unique global data set on numbers of naturalized alien vascular plant species, representing the most comprehensive collection of naturalized plant distributions currently available. The model identifies major source regions, introduction routes, and hot spots of plant invasions that agree well with observed naturalized plant numbers. Contrary to common belief, we show that the 'imperialist dogma,' which holds that Europe has been a net exporter of naturalized plants since colonial times, does not hold for the past 60 years, during which more naturalized plants were imported to than exported from Europe. Our results highlight that the current distribution of naturalized plants is best predicted by socioeconomic activities 20 years ago. We took advantage of the observed time lag and used trade developments up to recent times to predict naturalized plant trajectories for the next two decades. This shows that particularly strong increases in naturalized plant numbers are expected in the next 20 years for emerging economies in megadiverse regions. The interaction with predicted future climate change will increase invasions in northern temperate countries and reduce them in tropical and subtropical regions, yet not by enough to cancel out the trade-related increase.
A large part of classical visual psychophysics was concerned with the fundamental question of how pattern information is initially encoded in the human visual system. From these studies a relatively standard model of early spatial vision emerged, based on spatial-frequency- and orientation-specific channels followed by an accelerating nonlinearity and divisive normalization: contrast gain control. Here we implement such a model in an image-computable way, allowing it to take arbitrary luminance images as input. Testing our implementation on classical psychophysical data, we find that it explains contrast detection data, including the ModelFest data, contrast discrimination data, and oblique masking data, using a single set of parameters. Leveraging the advantage of an image-computable model, we test our model against a recent dataset using natural images as masks. We find that the model explains these data reasonably well, too. To explain data obtained at different presentation durations, our model requires different parameters to achieve an acceptable fit. In addition, we show that contrast gain control with the fitted parameters results in a very sparse encoding of luminance information, in line with notions from efficient coding. Translating the standard early spatial vision model into an image-computable form yielded two further insights: first, the nonlinear processing requires a denser sampling of spatial frequency and orientation than optimal coding suggests; second, the normalization needs to be fairly local in space to fit the data obtained with natural image masks. Finally, our image-computable model can serve as a tool in future quantitative analyses: it allows optimized stimuli to be used to test the model and variants of it, with potential applications as an image-quality metric. In addition, it may serve as a building block for models of higher-level processing.
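The core computation of such a model can be sketched in a few lines: oriented linear filters, an accelerating pointwise nonlinearity, and divisive normalization over the channel responses. The implementation below is a simplified illustration with assumed exponents and filter parameters, not the authors' fitted model.

```python
# Sketch of contrast gain control: linear oriented filters, accelerating
# nonlinearity, divisive normalization across channels.
import numpy as np
from scipy.signal import fftconvolve

def gabor(size, freq, theta, sigma):
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr)

rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128))        # stand-in luminance image

p, q, sigma_n = 2.4, 2.0, 0.1                  # assumed exponents / semisaturation
thetas = np.deg2rad([0, 45, 90, 135])
linear = np.stack([fftconvolve(image, gabor(31, 0.1, t, 6.0), mode="same")
                   for t in thetas])

excitation = np.abs(linear) ** p               # accelerating nonlinearity
pool = sigma_n ** q + (np.abs(linear) ** q).sum(axis=0)  # normalization pool
responses = excitation / pool                  # divisive normalization
```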
Determination of historical parameters of a small catchment: the example of the Pfefferfließ
(2010)
Using a stream, the Pfefferfließ, as an example, the hydrological situation of a near-natural state in the 18th century was reconstructed using various methods. The basis for determining this near-natural 18th-century state was historical data such as maps, manuscripts, and land improvement (melioration) plans. The detection and survey of historical channel cross-sections, as well as the modelling of 18th-century discharge, also contribute to building an overall picture of the 18th century. The insights gained from these data were assessed for further use as a reference state for river restoration measures.
Robust appraisals of climate impacts at different levels of global-mean temperature increase are vital to guide assessments of dangerous anthropogenic interference with the climate system. The 2015 Paris Agreement includes a two-headed temperature goal: "holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C". Despite the prominence of these two temperature limits, a comprehensive overview of the differences in climate impacts at these levels is still missing. Here we provide an assessment of key impacts of climate change at warming levels of 1.5 °C and 2 °C, including extreme weather events, water availability, agricultural yields, sea-level rise and risk of coral reef loss. Our results reveal substantial differences in impacts between 1.5 °C and 2 °C of warming that are highly relevant for the assessment of dangerous anthropogenic interference with the climate system. For heat-related extremes, the additional 0.5 °C increase in global-mean temperature marks the difference between events at the upper limit of present-day natural variability and a new climate regime, particularly in tropical regions. Similarly, this warming difference is likely to be decisive for the future of tropical coral reefs. In a scenario with an end-of-century warming of 2 °C, virtually all tropical coral reefs are projected to be at risk of severe degradation due to temperature-induced bleaching from 2050 onwards. This fraction is reduced to about 90% in 2050 and projected to decline to 70% by 2100 in a 1.5 °C scenario. Analyses of precipitation-related impacts reveal distinct regional differences, and hot spots of change emerge. The regional reduction in median water availability for the Mediterranean is found to nearly double, from 9% to 17%, between 1.5 °C and 2 °C, and the projected lengthening of regional dry spells increases from 7% to 11%. Projections for agricultural yields differ between crop types as well as world regions. While some (in particular high-latitude) regions may benefit, tropical regions like West Africa, South-East Asia, as well as Central and northern South America are projected to face substantial local yield reductions, particularly for wheat and maize. Best-estimate sea-level rise projections based on two illustrative scenarios indicate a rise of 50 cm by 2100 relative to year-2000 levels for a 2 °C scenario, and about 10 cm lower levels for a 1.5 °C scenario. In a 1.5 °C scenario, the rate of sea-level rise in 2100 would be reduced by about 30% compared to a 2 °C scenario. Our findings highlight the importance of regional differentiation in assessing both future climate risks and different vulnerabilities to incremental increases in global-mean temperature. The article provides a consistent and comprehensive assessment of existing projections and a good basis for future work on refining our understanding of the difference between impacts at 1.5 °C and 2 °C of warming.
Experiments in research on memory, language, and other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether or not it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe the basic questions one should ask about a model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is reading-time data with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
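As an illustration of one workflow step, a prior predictive check, here is a sketch in Python using PyMC (the paper itself uses Stan/brms). The lognormal reading-time model form and the priors below are assumptions made for the example.

```python
# Prior predictive check for a lognormal reading-time model (PyMC >= 5).
import numpy as np
import pymc as pm

condition = np.repeat([0.5, -0.5], 50)   # object vs subject relatives, coded

with pm.Model():
    alpha = pm.Normal("alpha", mu=6.0, sigma=0.5)   # intercept on log-ms scale
    beta = pm.Normal("beta", mu=0.0, sigma=0.1)     # relative-clause effect
    sigma = pm.HalfNormal("sigma", sigma=0.5)
    mu = alpha + beta * condition
    rt = pm.LogNormal("rt", mu=mu, sigma=sigma, shape=condition.size)
    idata = pm.sample_prior_predictive(500, random_seed=1)

# Simulated reading times should span a plausible range (roughly 100 ms to a
# few seconds), not absurd values; otherwise, revise the priors.
rts = idata.prior["rt"].values
print(np.percentile(rts, [2.5, 50, 97.5]))
```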
Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from the district to the country level. We discuss the broad and heavily skewed distribution of insured losses, which poses difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic wind-damage relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two sources of wind gust data: direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of gust data has little impact on the evaluation of German storm loss, spatially resolved coefficients of variation reveal a dependence between model and data choice. The comparison shows that the probabilistic models by Heneka et al. (2006) and Prahl et al. (2012) both provide accurate loss predictions for moderate to extreme losses, with generally small coefficients of variation. We favour the latter model in terms of applicability. Application of the versatile deterministic model by Klawa and Ulbrich (2003) should be restricted to extreme losses, for which it shows the least bias and errors comparable to those of the probabilistic model by Prahl et al. (2012).
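For concreteness, the deterministic model referred to above follows a loss index of the Klawa and Ulbrich (2003) type, in which daily losses scale with the population-weighted cubed exceedance of the local 98th gust percentile. The sketch below illustrates that index on invented data.

```python
# Sketch of a Klawa & Ulbrich (2003)-type storm loss index (invented data).
import numpy as np

rng = np.random.default_rng(2)
gusts = rng.weibull(2.0, size=(365, 100)) * 15.0   # daily gusts, 100 districts
v98 = np.percentile(gusts, 98, axis=0)             # local 98th percentiles
pop = rng.integers(10_000, 1_000_000, size=100)    # district population weights

exceed = np.clip(gusts / v98 - 1.0, 0.0, None)     # only above-threshold days
loss_index = (pop * exceed**3).sum(axis=1)         # daily loss index
print("top 5 storm days:", np.argsort(loss_index)[-5:])
```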
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events into quantifiable damage. In practice, the availability of damage functions is limited by a lack of data sources and an incomplete understanding of damage processes. Studying the characteristics of damage functions for different hazards could strengthen their theoretical foundation and support their development and validation. Accordingly, we investigate analogies between damage functions for coastal flooding and for wind storms and identify a unified approach. This approach is generally applicable to granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitudes and at both the microscale and the macroscale level. The main findings are the dominance of uncertainty stemming from the hazard magnitude and the persistent contribution of intrinsic uncertainties at both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
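The granular-portfolio idea can be sketched as follows: each asset has its own damage threshold and a shared microscale damage function, and macroscale damage is the value-weighted aggregate. The functional form and all numbers below are assumptions made for illustration, not the paper's calibrated functions.

```python
# Sketch of a unified, granular-portfolio damage aggregation (invented form).
import numpy as np

rng = np.random.default_rng(3)
n_assets = 10_000
thresholds = rng.normal(1.0, 0.2, n_assets)   # asset-level hazard thresholds
values = rng.lognormal(11.0, 1.0, n_assets)   # asset values (EUR)

def micro_damage(h, thr, steepness=3.0):
    # Relative damage of one asset: zero below its threshold, then a
    # power-law rise, capped at total loss.
    return np.clip(np.maximum(h - thr, 0.0) ** steepness, 0.0, 1.0)

def macro_damage(h):
    # Macroscale damage: value-weighted aggregate over the portfolio.
    return (values * micro_damage(h, thresholds)).sum()

for h in (0.8, 1.0, 1.2, 1.5):                # increasing hazard magnitudes
    print(h, f"{macro_damage(h):.3e}")
```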
The economic assessment of the impacts of storm surges and sea-level rise in coastal cities requires high-level information on the damage and protection costs associated with varying flood heights. We provide a systematically and consistently calculated dataset of macroscale damage and protection cost curves for the 600 largest European coastal cities, opening up a wide range of applications. Offering the first comprehensive dataset to include the costs of dike protection, we provide the underpinning information needed to run comparative assessments of the costs and benefits of coastal adaptation. Aggregate cost curves for coastal flooding at the city level are commonly regarded as by-products of impact assessments and are generally not published as standalone datasets. Hence, our work also aims to initiate a more critical discussion of the availability and derivation of cost curves.
Nearly 13,000 years ago, the warming trend into the Holocene was sharply interrupted by a reversal to near-glacial conditions. The climatic causes and ecological consequences of the Younger Dryas (YD) have been extensively studied; however, proxy archives from the Mediterranean basin capturing this period are scarce and do not provide annual resolution. Here, we report a hydroclimatic reconstruction from stable isotopes (δ18O, δ13C) in subfossil pines from southern France. Growing before and during the transition into the YD (12,900-12,600 cal BP), the trees provide an annually resolved, continuous sequence of atmospheric change. The isotopic signature of tree source water (δ18Osw) and estimates of relative air humidity were reconstructed as proxies for variations in air mass origin and precipitation regime. We find a distinct increase in the inter-annual variability of source water isotopes (δ18Osw), with three major downturn phases of increasing magnitude beginning at 12,740 cal BP. The observed variation most likely results from an amplified intensity of North Atlantic (low δ18Osw) versus Mediterranean (high δ18Osw) precipitation. This marked pattern of climate variability is not seen in records from higher latitudes and is likely a consequence of atmospheric circulation oscillations at the margin of the southward-moving polar front.