With accelerating climate cooling in the late Cenozoic, glacial and periglacial erosion became more widespread on the surface of the Earth. The resultant shift in erosion patterns significantly changed the large-scale morphology of many mountain ranges worldwide. Whereas the glacial fingerprint is easily distinguished by its characteristic fjords and U-shaped valleys, the periglacial fingerprint is more subtle but potentially prevails in some mid- to high-latitude landscapes. Previous models have advocated a frost-driven control on debris production at steep headwalls and glacial valley sides. Here we investigate the important role that periglacial processes also play in less steep parts of mountain landscapes. Understanding the influences of frost-driven processes in low-relief areas requires a focus on the consequences of an accreting soil mantle, which characterises such surfaces. We present a new model that quantifies two key physical processes: frost cracking and frost creep, as a function of both temperature and sediment thickness. Our results yield new insights into how climate and sediment transport properties combine to scale the intensity of periglacial processes. The thickness of the soil mantle strongly modulates the relation between climate and the intensity of mechanical weathering and sediment flux. Our results also point to an offset between the conditions that promote frost cracking and those that promote frost creep, indicating that a stable climate can provide optimal conditions for only one of those processes at a time. Finally, quantifying these relations also opens up the possibility of including periglacial processes in large-scale, long-term landscape evolution models, as demonstrated in a companion paper.
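The frost-cracking half of such a model is often built on how deeply the annual temperature cycle penetrates the subsurface. The following is a minimal sketch, not the paper's parameterisation: it assumes a sinusoidal surface forcing, a commonly used −8 to −3 °C frost-cracking window, a gradient-weighted index loosely in the style of Hales and Roering (2007), and a crude damping of the signal by the soil mantle; all numeric values are illustrative.

```python
import numpy as np

# Illustrative sketch: conduction of a sinusoidal annual temperature cycle,
#   T(z, t) = MAT + A * exp(-(z+s)/d) * sin(2*pi*t/P - (z+s)/d),
# damped by an overlying soil mantle of thickness s; d is the damping depth.
# One common formulation of a frost-cracking index (FCI) weights the time
# spent inside a frost-cracking window (here -8 to -3 degC, an assumption)
# by the local thermal gradient, which drives water migration to cracks.

KAPPA = 1.0e-6                                  # thermal diffusivity, m^2/s (assumed)
P = 365.25 * 86400.0                            # annual period, s
D = np.sqrt(2.0 * KAPPA * P / (2.0 * np.pi))    # damping depth, ~3 m

def frost_cracking_index(mat, amp=12.0, soil=1.0, zmax=8.0):
    z = np.linspace(0.0, zmax, 200)                      # depth below soil base, m
    t = np.linspace(0.0, P, 365)[None, :]                # one year, s
    arg = 2.0 * np.pi * t / P - (z[:, None] + soil) / D
    temp = mat + amp * np.exp(-(z[:, None] + soil) / D) * np.sin(arg)
    in_window = (temp > -8.0) & (temp < -3.0)
    grad = np.abs(np.gradient(temp, z, axis=0))          # degC per m
    return (grad * in_window).mean()                     # crude FCI

for mat in (-10, -6, -2, 2):                             # mean annual temp, degC
    print(mat, [round(frost_cracking_index(mat, soil=s), 3) for s in (0.1, 1.0, 3.0)])
```

The printed table illustrates the qualitative behaviour described above: the index peaks at intermediate mean annual temperatures, and a thicker soil mantle damps both the temperature cycle and the thermal gradient, modulating the climate signal.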
Soil properties show high heterogeneity at different spatial scales and their correct characterization remains a crucial challenge over large areas. The aim of the study is to quantify the impact of different types of uncertainties that arise from the unresolved soil spatial variability on simulated hydrological states and fluxes. Three perturbation methods are presented for the characterization of uncertainties in soil properties. The methods are applied on the soil map of the upper Neckar catchment (Germany), as an example. The uncertainties are propagated through the distributed mesoscale hydrological model (mHM) to assess the impact on the simulated states and fluxes. The model outputs are analysed by aggregating the results at different spatial and temporal scales. These results show that the impact of the different uncertainties introduced in the original soil map is equivalent when the simulated model outputs are analysed at the model grid resolution (i.e. 500 m). However, several differences are identified by aggregating states and fluxes at different spatial scales (by subcatchments of different sizes or coarsening the grid resolution). Streamflow is only sensitive to the perturbation of long spatial structures while distributed states and fluxes (e.g. soil moisture and groundwater recharge) are only sensitive to the local noise introduced to the original soil properties. A clear identification of the temporal and spatial scale for which finer-resolution soil information is (or is not) relevant is unlikely to be universal. However, the comparison of the impacts on the different hydrological components can be used to prioritize the model improvements in specific applications, either by collecting new measurements or by calibration and data assimilation approaches. In conclusion, the study underlines the importance of a correct characterization of uncertainty in soil properties. With that, soil maps with additional information regarding the unresolved soil spatial variability would provide strong support to hydrological modelling applications.
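The paper's three perturbation methods are not detailed here, but the contrast the abstract draws between local noise and long spatial structures can be illustrated with a multiplicative random field whose correlation length is varied; the generator and all parameter values below are assumptions for illustration only, not the mHM setup.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

def perturb(field, corr_len_px, sigma=0.1):
    """Multiplicative lognormal perturbation of a gridded soil property.
    corr_len_px ~ correlation length in pixels: small values mimic local
    noise, large values mimic long spatial structures."""
    noise = rng.standard_normal(field.shape)
    noise = gaussian_filter(noise, corr_len_px)   # impose spatial correlation
    noise *= sigma / noise.std()                  # rescale to target variability
    return field * np.exp(noise)                  # keeps properties positive

ksat = np.full((100, 100), 1.0e-5)                # e.g., saturated conductivity, m/s
local = perturb(ksat, corr_len_px=1)              # short-range noise
structured = perturb(ksat, corr_len_px=25)        # long-range structure
# Both fields have the same point-scale variability; they differ only in
# spatial correlation, which is what streamflow vs. distributed states
# respond to differently in the study.
print(local.std() / local.mean(), structured.std() / structured.mean())
```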
Xanthomonas phaseoli pv. manihotis (Xpm) is the causal agent of cassava bacterial blight, the most important bacterial disease in this crop. There is a paucity of knowledge about the metabolism of Xanthomonas and its relevance in the pathogenic process, with the exception of the elucidation of the xanthan biosynthesis route. Here we report the reconstruction of the genome-scale model of Xpm metabolism and the insights it provides into plant-pathogen interactions. The model, iXpm1556, displayed 1,556 reactions, 1,527 compounds, and 890 genes. Metabolic maps of central amino acid and carbohydrate metabolism, as well as xanthan biosynthesis of Xpm, were reconstructed using Escher (https://escher.github.io/) to guide the curation process and for further analyses. The model was constrained using the RNA-seq data of a mutant of Xpm for quorum sensing (QS), and these data were used to construct context-specific models (CSMs) of the metabolism of the two strains (wild type and QS mutant). The CSMs and flux balance analysis were used to get insights into pathogenicity, xanthan biosynthesis, and QS mechanisms. Between the CSMs, 653 reactions were shared; unique reactions belong to purine, pyrimidine, and amino acid metabolism. Alternative objective functions were used to demonstrate a trade-off between xanthan biosynthesis and growth and the re-allocation of resources in the process of biosynthesis. Important features altered by QS included carbohydrate metabolism, NAD(P)(+) balance, and fatty acid elongation. In this work, we modeled the xanthan biosynthesis and the QS process and their impact on the metabolism of the bacterium. This model will be useful for researchers studying host-pathogen interactions and will provide insights into the mechanisms of infection used by this and other Xanthomonas species.
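Flux balance analysis itself reduces to a linear programme: maximise an objective flux subject to steady-state mass balance Sv = 0 and flux bounds. The toy network below is invented (it is not iXpm1556) and only sketches the mechanism behind the growth-versus-xanthan trade-off described above.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis: maximize a biomass flux subject to
# steady-state mass balance S v = 0 and flux bounds.
# Columns: v0 uptake, v1 conversion, v2 biomass, v3 xanthan secretion.
S = np.array([
    [1, -1,  0,  0],   # metabolite A: produced by uptake, consumed by v1
    [0,  1, -1, -1],   # metabolite B: feeds biomass and xanthan
])
bounds = [(0, 10), (0, None), (0, None), (0, None)]

c = np.zeros(4); c[2] = -1.0            # linprog minimizes, so negate biomass
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("max growth flux:", res.x[2])

# Trade-off: forcing xanthan secretion diverts flux away from growth.
bounds[3] = (5, 5)
res2 = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("growth with forced xanthan flux:", res2.x[2])
```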
After a century of semi-restricted floodplain development, Southern Alberta, Canada, was struck by the devastating 2013 Flood. Aging infrastructure and limited property-level floodproofing likely contributed to the $4-6 billion (CAD) losses. Following this catastrophe, Alberta has seen a revival in flood management, largely focused on structural protections. However, concurrent with the recent structural work was a 100,000+ increase in Calgary's population in the 5 years following the flood, leading to further densification of high-hazard areas. This study implements the novel Stochastic Object-based Flood damage Dynamic Assessment (SOFDA) model framework to quantify the progression of the direct-damage flood risk in a mature urban neighborhood after the 2013 Flood. Five years of remote-sensing data, property assessment records, and inundation simulations following the flood are used to construct the model. Results show that in these 5 years, vulnerability trends (like densification) have increased flood risk by 4%; however, recent structural mitigation projects have reduced overall flood risk by 47% for this case study. These results demonstrate that the flood management revival in Southern Alberta has largely been successful at reducing flood risk; however, the gains are under threat from continued development and densification absent additional floodproofing regulations.
The current awareness of the high importance of urban green leads to a stronger need for tools that comprehensively represent urban green and its benefits. A common scientific approach is the assessment of urban ecosystem services (UES) based on remote sensing methods at the city or district level. Urban planning, however, requires fine-grained data that match local management practices. Hence, this study linked local biotope and tree mapping methods to the concept of ecosystem services. The methodology was tested in an inner-city district in SW Germany, comparing publicly accessible areas and non-accessible courtyards. The results provide area-specific [m²] information on the green inventory at the microscale, whereas derived stock and UES indicators form the basis for comparative analyses regarding climate adaptation and biodiversity. In the case study, there are ten times more micro-scale green spaces in private courtyards than in the public space, as well as twice as many trees. The approach transfers a scientific concept into municipal planning practice, enables the quantitative assessment of urban green at the microscale and illustrates the importance of green stock data in private areas to enhance decision support in urban development. Different aspects concerning data collection and data availability are critically discussed.
During reading, rapid eye movements (saccades) shift the reader's line of sight from one word to another for high-acuity visual information processing. While experimental data and theoretical models show that readers aim at word centers, the eye-movement (oculomotor) accuracy is low compared to other tasks. As a consequence, distributions of saccadic landing positions indicate large (i) random errors and (ii) systematic over- and undershoot of word centers, which additionally depend on saccade lengths (McConkie et al., Vision Research, 28(10), 1107–1118, 1988). Here we show that both error components can be simultaneously reduced by reading texts from right to left in German (N = 32). We used our experimental data to test a Bayesian model of saccade planning. First, the experimental data are consistent with the model. Second, the model makes specific predictions of the effects of the precision of the prior and the (sensory) likelihood. Our results suggest that a more precise sensory likelihood can explain the reduction of both the random and the systematic error components.
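The core of such a Bayesian account is conjugate-Gaussian cue combination: the planned landing site weights a noisy percept of the word centre against a prior, so a more precise likelihood shrinks the systematic pull toward the prior and, in the regime sketched here (likelihood at least as precise as the prior), also the trial-to-trial scatter. The numbers below are illustrative; this is not the fitted model from the paper.

```python
import numpy as np

def planned_landing(target, prior_mean, prior_prec, lik_prec, n=100000, seed=0):
    """Conjugate-Gaussian sketch: each saccade is planned at the posterior
    mean combining a noisy percept of the word centre with a prior."""
    rng = np.random.default_rng(seed)
    sensed = target + rng.standard_normal(n) / np.sqrt(lik_prec)  # noisy percept
    w = lik_prec / (lik_prec + prior_prec)                        # precision weight
    landing = w * sensed + (1.0 - w) * prior_mean
    return landing.mean() - target, landing.std()   # systematic, random error

target, prior_mean = 7.0, 5.0       # word centre vs. prior, in letter positions
for lik_prec in (1.0, 2.0, 5.0):    # increasingly precise sensory likelihood
    sys_err, rnd_err = planned_landing(target, prior_mean, 1.0, lik_prec)
    print(f"lik_prec={lik_prec}: systematic={sys_err:+.2f}, random={rnd_err:.2f}")
```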
Nested application conditions generalise the well-known negative application conditions and are important for several application domains. In this paper, we present Local Church-Rosser, Parallelism, Concurrency and Amalgamation Theorems for rules with nested application conditions in the framework of M-adhesive categories, where M-adhesive categories are slightly more general than weak adhesive high-level replacement categories. Most of the proofs are based on the corresponding statements for rules without application conditions and two shift lemmas stating that nested application conditions can be shifted over morphisms and rules.
Linking together the processes of rapid physical erosion and the resultant chemical dissolution of rock is a crucial step in building an overall deterministic understanding of weathering in mountain belts. Landslides, which are the most volumetrically important geomorphic process at these high rates of erosion, can generate extremely high rates of very localised weathering. To elucidate how this process works, we have taken advantage of uniquely intense landsliding, resulting from Typhoon Morakot, in the T'aimali River and surrounds in southern Taiwan. Combining detailed analysis of landslide seepage chemistry with estimates of catchment-by-catchment landslide volumes, we demonstrate that in this setting the primary role of landslides is to introduce fresh, highly labile mineral phases into the surface weathering environment. There, rapid weathering is driven by the oxidation of pyrite and the resultant sulfuric-acid-driven dissolution of primarily carbonate rock. The total dissolved load correlates well with dissolved sulfate - the chief product of this style of weathering - in both landslides and streams draining the area (R² = 0.841 and 0.929, respectively; p < 0.001 in both cases), with solute chemistry in seepage from landslides and catchments affected by significant landsliding governed by the same weathering reactions. The predominance of coupled carbonate-sulfuric-acid-driven weathering is the key difference between these sites and previously studied landslides in New Zealand (Emberson et al., 2016), but in both settings increasing volumes of landslides drive greater overall solute concentrations in streams.
Bedrock landslides, by excavating deep below saprolite-rock interfaces, create conditions for weathering in which all mineral phases in a lithology are initially unweathered within landslide deposits. As a result, the most labile phases dominate the weathering immediately after mobilisation and during a transient period of depletion. This mode of dissolution can strongly alter the overall output of solutes from catchments and their contribution to global chemical cycles if landslide-derived material is retained in catchments for extended periods after mass wasting.
This thesis investigates nonlinear coupling mechanisms of acoustic oscillators that can lead to synchronization. Building on the questions raised in previous work, theoretical and experimental studies as well as numerical simulations are used to identify the elements of sound generation in the organ pipe and the mechanisms of mutual interaction between organ pipes. From this, a nonlinearly coupled model of self-excited oscillators, based entirely on aeroacoustic and fluid-dynamical first principles, is developed for the first time to describe the behaviour of two interacting organ pipes. The model calculations are compared with the experimental findings. It turns out that the sound generation and the coupling mechanisms of organ pipes are largely described correctly by the developed oscillator model. In particular, it clarifies the cause of the nonlinear relationship between coupling strength and synchronization of the coupled two-pipe system, which manifests itself in a nonlinear shape of the Arnold tongue. With these insights, the influence of the room on the sound generation of organ pipes is considered. To this end, numerical simulations of the interaction of an organ pipe with various room geometries, such as plane, convex, concave, and serrated geometries, are examined as examples. The influence of swell boxes on the sound generation and timbre of the organ pipe is also studied. In further, novel synchronization experiments with identically tuned organ pipes as well as with mixtures, synchronization is investigated for various horizontal and vertical pipe distances in the plane of sound radiation. The spatially isotropic discontinuities in the oscillation behaviour of the coupled pipe systems, observed here for the first time, point to distance-dependent switching between anti-phase and in-phase synchronization regimes. Finally, the possibility of realistically reproducing the phenomenon of synchronization of two organ pipes by numerical simulation, i.e. by solving the compressible Navier-Stokes equations with appropriate boundary and initial conditions, is documented. This, too, is a novelty.
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 °C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that due to wave propagation, ice is discharged much faster back into the ocean than would be expected from pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. A millennium-scale storage of at least 80% of the additional ice requires placing it at a distance of at least 700 km from the coastline. The pumping energy required to lift ocean water high enough to mitigate the currently observed 3 mm yr⁻¹ of sea-level rise would exceed 7% of the current global primary energy supply. At the same time, the approach offers comprehensive protection for entire coastlines, particularly including regions that cannot be protected by dikes.
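The energy figure can be checked with back-of-the-envelope arithmetic: lift the water mass equivalent to 3 mm yr⁻¹ of sea-level rise to the elevation of the ice-sheet interior. The ocean area, lift height, pump efficiency, and global energy supply below are rounded assumptions, not the paper's exact inputs.

```python
# Back-of-the-envelope check of the pumping-energy claim (all numbers
# are rounded textbook values, not the paper's exact inputs).
OCEAN_AREA = 3.6e14          # m^2
SLR_RATE = 3.0e-3            # m/yr of sea-level rise to offset
RHO = 1.0e3                  # kg/m^3 (seawater ~1.03e3; ignored here)
G = 9.81                     # m/s^2
LIFT = 3000.0                # m, interior ice-sheet surface elevation (assumed)
EFFICIENCY = 0.75            # pump efficiency (assumed)

mass_per_year = OCEAN_AREA * SLR_RATE * RHO          # ~1.1e15 kg/yr
energy_per_year = mass_per_year * G * LIFT / EFFICIENCY

GLOBAL_PRIMARY = 6.0e20      # J/yr, ~ current global primary energy supply
print(f"{energy_per_year:.2e} J/yr = {energy_per_year / GLOBAL_PRIMARY:.1%} of supply")
```

With these assumptions the script prints roughly 4e19 J/yr, about 7% of supply, consistent with the figure quoted in the abstract.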
Paradoxical leadership behaviour (PLB) represents an emerging leadership construct that can help leaders deal with conflicting demands. In this paper, we report three studies that add to this nascent literature theoretically, methodologically, and empirically. In Study 1, we validate an effective short-form measure of global PLB using three different samples. In Studies 2 and 3, we draw on the job demands-resources model to propose that paradoxical leaders promote followers' work engagement by simultaneously fostering follower goal clarity and work autonomy. The results of survey data from Studies 2 and 3 largely confirm our model. Specifically, our findings show that PLB is positively associated with follower goal clarity and work autonomy, and that PLB exerts an indirect effect on work engagement via these variables. Moreover, our results support a hypothesized interaction effect of goal clarity and work autonomy to predict followers' work engagement, as well as a conditional indirect effect of PLB on work engagement via the interactive effect. We discuss the practical implications for leaders and organizations.
Practitioner points
To effectively engage followers in their work, leaders should create work environments in which followers know exactly what to do (i.e., have high goal clarity), but at the same time can determine on their own how to do their work (i.e., have high work autonomy).
To foster both goal clarity and work autonomy, leaders should combine communal (e.g., other-centred, flexibility-providing) and agentic aspects of leadership (e.g., maintaining decision control and enforcing performance standards).
HR departments should design leadership training programmes that help leaders to combine seemingly opposing, yet ultimately synergistic behaviours.
3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect.

In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach.

(1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy.

(2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems.

(3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes.

We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
The study deals with the identification and characterization of rapid subsurface flow structures through pedo- and geo-physical measurements and irrigation experiments at the point, plot and hillslope scales. Our investigation of flow-relevant structures and hydrological responses addresses the general interplay of form and function, respectively. To obtain a holistic picture of the subsurface, a large set of different laboratory, exploratory and experimental methods was used at the different scales. For exploration, these methods included drilled soil core profiles, in situ measurements of infiltration capacity and saturated hydraulic conductivity, and laboratory analyses of soil water retention and saturated hydraulic conductivity. The irrigation experiments at the plot scale were monitored through a combination of dye tracer, salt tracer, soil moisture dynamics, and 3-D time-lapse ground penetrating radar (GPR) methods. At the hillslope scale the subsurface was explored by a 3-D GPR survey. A natural storm event and an irrigation experiment were monitored by a dense network of soil moisture observations and a cascade of 2-D time-lapse GPR "trenches". We show that the shift between the activated and non-activated states of the flow paths is needed to distinguish structures from overall heterogeneity. Pedo-physical analyses of point-scale samples are the basis for sub-scale structure inference. At the plot and hillslope scales, 3-D and 2-D time-lapse GPR applications are successfully employed as non-invasive means to image subsurface response patterns and to identify flow-relevant paths. Tracer recovery and soil water responses from irrigation experiments deliver a consistent estimate of response velocities. The combined observation of form and function under active conditions provides the means to localize and characterize the structures (this study) and the hydrological processes (companion study Angermann et al., 2017, this issue).
Global heat adaptation among urban populations and its evolution under different climate futures
(2022)
Heat and increasing ambient temperatures under climate change represent a serious threat to human health in cities. Heat exposure has been studied extensively at a global scale. Studies comparing a defined temperature threshold with future daytime temperatures over a certain period of time have concluded that the threat to human health will increase. Such findings, however, do not explicitly account for possible changes in future human heat adaptation and might even overestimate heat exposure. Heat adaptation and its development thus remain unclear. Human heat adaptation refers to the local temperature to which populations are adjusted. It can be inferred from the lowest point of the U- or V-shaped heat-mortality relationship (HMR), the Minimum Mortality Temperature (MMT). While epidemiological studies inform on the MMT at the city scale for case studies, a general model applicable at the global scale to infer temporal change in MMTs has not yet been realised. The conventional approach depends on data availability, their robustness, and on access to daily mortality records at the city scale. A thorough analysis, however, must account for future changes in the MMT, as heat adaptation happens partly passively. Human heat adaptation consists of two aspects: (1) the intensity of the heat hazard that is still tolerated by human populations, meaning the heat burden they can bear, and (2) the wealth-induced technological, social and behavioural measures that can be employed to avoid heat exposure. The objective of this thesis is to investigate and quantify human heat adaptation among urban populations at a global scale under the current climate and to project future adaptation under climate change until the end of the century, which has not yet been accomplished. The evaluation of global heat adaptation among urban populations and its evolution under climate change comprises three levels of analysis. First, using the example of Germany, the MMT is calculated at the city level by applying the conventional method. Second, this thesis compiles a data pool of 400 urban MMTs to develop and train a new model capable of estimating MMTs on the basis of physical and socio-economic city characteristics using non-linear multivariate regression. The MMT is successfully described as a function of the current climate, the topography and the socio-economic standard, independently of daily mortality data, for cities around the world. The city-specific MMT estimates represent a measure of human heat adaptation among the urban population. In a third and final analysis, the model to derive human heat adaptation was adjusted to be driven by projected climate and socio-economic variables for the future. This allowed for estimation of the MMT and its change for 3,820 cities worldwide for different combinations of climate trajectories and socio-economic pathways until 2100. Knowledge of the future evolution of heat adaptation is a novelty, as research has so far mostly addressed heat exposure and its future development. In this work, changes in heat adaptation and exposure were analysed jointly. The result is a wide range of possible health-related outcomes up to 2100, from which two scenarios with the highest socio-economic development but opposing warming levels were highlighted for comparison.
Strong economic growth based upon fossil fuel exploitation is associated with a high gain in heat adaptation, but it may not compensate for the negative health effects of the increased heat exposure that severe climate change would cause in 30% to 40% of the cities investigated. A slightly less strong but sustainable growth brings moderate gains in heat adaptation, yet lower heat exposure, with exposure reductions in 80% to 84% of the cities in terms of frequency (number of days exceeding the MMT) and intensity (magnitude of the MMT exceedance) owing to milder global warming. Choosing a 2 °C-compatible development by 2100 would therefore lower the risk of heat-related mortality at the end of the century. In summary, this thesis makes diverse and multidisciplinary contributions to a deeper understanding of human adaptation to heat under the current and the future climate. It is one of the first studies to carry out a systematic and statistical analysis of urban characteristics that are useful as MMT drivers in order to establish a generalised model of human heat adaptation applicable at the global level. A broad range of possible heat-related health outcomes for various future scenarios was shown for the first time. This work is of relevance for the assessment of heat-health impacts in regions where mortality data are missing or not accessible. The results are useful for health care planning at the meso- and macro-level and for urban and climate change adaptation planning. Lastly, beyond having met the posed objective, this thesis advances research towards a global future impact assessment of heat on human health by providing an alternative method of MMT estimation that is spatially and temporally flexible in its application.
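The basic step of inferring an MMT from the U-shaped HMR can be sketched with a quadratic fit of daily mortality against temperature; epidemiological practice uses more elaborate distributed-lag non-linear models, and the synthetic data and quadratic form here are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily data with a U-shaped heat-mortality relationship:
# mortality is lowest near 19 degC (the "true" MMT in this toy example).
temp = rng.uniform(-5, 35, 3650)
mortality = 30 + 0.05 * (temp - 19.0) ** 2 + rng.standard_normal(temp.size)

# Fit a quadratic HMR and locate its minimum analytically: for
# y = a*T^2 + b*T + c, the minimum lies at T = -b / (2a).
a, b, c = np.polyfit(temp, mortality, deg=2)
mmt = -b / (2.0 * a)
print(f"estimated MMT: {mmt:.1f} degC")   # ~19 degC
```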
Manganese (Mn) is an essential micronutrient for development and function of the nervous system. Deficiencies in Mn transport have been implicated in the pathogenesis of Huntington's disease (HD), an autosomal dominant neurodegenerative disorder characterized by loss of medium spiny neurons of the striatum. Brain Mn levels are highest in the striatum and other basal ganglia structures, the brain regions most sensitive to Mn neurotoxicity. Mouse models of HD exhibit decreased striatal Mn accumulation, and HD striatal neuron models are resistant to Mn cytotoxicity. We hypothesized that the observed modulation of Mn cellular transport is associated with compensatory metabolic responses to HD pathology. Here we use an untargeted metabolomics approach, performing ultraperformance liquid chromatography-ion mobility-mass spectrometry (UPLC-IM-MS) on control and HD immortalized mouse striatal neurons to identify metabolic disruptions under three Mn exposure conditions: low (vehicle), moderate (non-cytotoxic) and high (cytotoxic). Our analysis revealed lower metabolite levels of pantothenic acid and glutathione (GSH) in HD striatal cells relative to control cells. HD striatal cells also exhibited lower abundance and impaired induction of isobutyryl carnitine in response to increasing Mn exposure. In addition, we observed induction of metabolites in the pentose shunt pathway in HD striatal cells after high Mn exposure. These findings provide metabolic evidence of an interaction between the HD genotype and biologically relevant levels of Mn in a striatal cell model with known HD-by-Mn exposure interactions. The metabolic phenotypes detected support existing hypotheses that changes in energetic processes underlie the pathobiology of both HD and Mn neurotoxicity.
Strong hydroclimatic controls on vulnerability to subsurface nitrate contamination across Europe
(2020)
Subsurface contamination due to excessive nutrient surpluses is a persistent and widespread problem in agricultural areas across Europe. The vulnerability of a particular location to pollution from reactive solutes, such as nitrate, is determined by the interplay between hydrologic transport and biogeochemical transformations. Current studies on the controls of subsurface vulnerability do not consider the transient behaviour of transport dynamics in the root zone. Here, using state-of-the-art hydrologic simulations driven by observed hydroclimatic forcing, we demonstrate the strong spatiotemporal heterogeneity of hydrologic transport dynamics and reveal that these dynamics are primarily controlled by the hydroclimatic gradient of the aridity index across Europe. Contrasting the space-time dynamics of transport times with the reactive timescales of denitrification in soil indicates that around 75% of the cultivated areas across Europe are potentially vulnerable to nitrate leaching for at least one-third of the year. We find that neglecting the transient nature of transport and reaction timescales results in a substantial underestimation of the extent of vulnerable regions, by almost 50%. Therefore, future vulnerability and risk assessment studies must account for the transient behaviour of transport and biogeochemical transformation processes.
Flash floods are caused by intense rainfall events and represent an insufficiently understood phenomenon in Germany. As a result of higher precipitation intensities, flash floods might occur more frequently in the future. In combination with changing land use patterns and urbanisation, damage mitigation, insurance and risk management in flash-flood-prone regions are becoming increasingly important. However, a better understanding of damage caused by flash floods requires the ex post collection of relevant but as yet sparsely available information for research. At the end of May 2016, very high and concentrated rainfall intensities led to severe flash floods in several southern German municipalities. The small town of Braunsbach stood as a prime example of the devastating potential of such events. Eight to ten days after the flash flood event, damage assessment and data collection were conducted in Braunsbach by investigating all affected buildings and their surroundings. To record and store the data on site, the open-source software bundle KoBoCollect was used as an efficient and easy way to gather information. Since the damage-driving factors of flash floods are expected to differ from those of riverine flooding, a post-hoc data analysis was performed, aiming to identify the influence of flood processes and building attributes on damage grades, which reflect the extent of structural damage. The data analyses include the application of random forest, a random general linear model and multinomial logistic regression, as well as the construction of a local impact map to reveal influences on the damage grades. Further, a Spearman's rho correlation matrix was calculated. The results reveal that the damage-driving factors of flash floods differ from those of riverine floods to a certain extent. A building's exposure in the flow direction shows an especially strong correlation with the damage grade and has high predictive power within the constructed damage models. Additionally, the results suggest that building materials as well as various building aspects, such as the existence of a shop window and the surroundings, might have an effect on the resulting damage. To verify and confirm these outcomes, as well as to support future mitigation strategies, risk management and planning, more comprehensive and systematic data collection is necessary.
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide and is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-reinforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate has so far relied on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here at a conceptual understanding by deriving a purposefully simple equation for the self-reinforcing feedback, which is then used to estimate the melt time for different levels of warming from three observable characteristics of the ice sheet itself and its surroundings. The analysis is purely conceptual in nature and omits important processes such as ice dynamics, which would be needed for applications to sea-level rise on centennial timescales. However, if the volume loss is dominated by the feedback, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 °C above the threshold and 500 years for 5 °C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
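One way a purposefully simple equation can produce a logarithmic melt time, sketched here as an illustration rather than the paper's actual derivation, is a linear melt-elevation feedback:

```latex
% Thinning h(t) raises surface air temperature via the lapse rate and
% thereby the melt rate (warming above threshold \Delta T, melt
% sensitivity c, feedback strength \gamma):
\frac{\mathrm{d}h}{\mathrm{d}t} = c\,(\Delta T + \gamma h), \qquad h(0) = 0
% Solving and inverting for the time to reach a thinning h_f gives
t(h_f) = \frac{1}{c\gamma}\,\ln\!\left(1 + \frac{\gamma\,h_f}{\Delta T}\right)
```

The logarithm diverges as the warming above the threshold ΔT approaches zero, reproducing the critical slow-down near the threshold, while larger warming shortens the melt time, qualitatively matching the 3500-year versus 500-year figures above.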
There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, these statistical models are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of the social sciences and education. New types of data require new data-analytical approaches. Such techniques have already evolved in fields with a long tradition of crunching big data (e.g., gene technology). The objective of the present paper is to competently apply these "imported" techniques to education data, more precisely VA scores, and to assess when and how they can extend or replace the classical psychometrics toolbox. The different models include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data of 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as the different machine learning models. However, across all schools, school VA scores from the different model types correlated highly. Yet the percentage of disagreements compared to multilevel models was not trivial, and the real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
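The mechanics of comparing model types for VA can be illustrated on synthetic data: fit an achievement model, average its residuals per school, and correlate the resulting VA scores with the (known) simulated school effects. The data-generating process and all hyperparameters below are assumptions, not the Luxembourg data; a multilevel model, omitted for brevity, would additionally shrink the school means.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Synthetic example: school value-added as the school-mean residual of
# an achievement model (grade-3 score predicted from grade-1 score).
n_schools, n_students = 50, 3000
school = rng.integers(0, n_schools, n_students)
true_va = rng.normal(0, 2, n_schools)               # latent school effects
g1 = rng.normal(100, 15, n_students)                # prior achievement
g3 = 20 + 0.8 * g1 + true_va[school] + rng.normal(0, 8, n_students)

X = g1[:, None]
for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    resid = g3 - model.fit(X, g3).predict(X)
    va = np.array([resid[school == s].mean() for s in range(n_schools)])
    print(type(model).__name__, round(np.corrcoef(va, true_va)[0, 1], 3))
```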
Inventories of individually delineated landslides are a key to understanding landslide physics and mitigating their impact. They permit assessment of area–frequency distributions and landslide volumes, and testing of statistical correlations between landslides and physical parameters such as topographic gradient or seismic strong motion. Amalgamation, i.e. the mapping of several adjacent landslides as a single polygon, can lead to potentially severe distortion of the statistics of these inventories. This problem can be especially severe in data sets produced by automated mapping. We present five inventories of earthquake-induced landslides mapped with different materials and techniques and affected by varying degrees of amalgamation. Errors on the total landslide volume and the power-law exponent of the area–frequency distribution, resulting from amalgamation, may be up to 200% and 50%, respectively. We present an algorithm based on image and digital elevation model (DEM) analysis for the automatic identification of amalgamated polygons. On a set of about 2000 polygons larger than 1000 m², tracing landslides triggered by the 1994 Northridge earthquake, the algorithm performs well, with only 2.7–3.6% of incorrectly amalgamated landslides missed and 3.9–4.8% of correct polygons incorrectly identified as amalgams. This algorithm can be used broadly to check landslide inventories and allows faster correction by automating the identification of amalgamation.
Flood risk is impacted by a range of physical and socio-economic processes. Hence, the quantification of flood risk ideally considers the complete flood risk chain, from atmospheric processes through catchment and river system processes to damage mechanisms in the affected areas. Although it is generally accepted that a multitude of changes along the risk chain can occur and impact flood risk, there is a lack of knowledge of how and to what extent changes in influencing factors propagate through the chain and finally affect flood risk. To fill this gap, we present a comprehensive sensitivity analysis which considers changes in all risk components, i.e. changes in climate, catchment, river system, land use, assets, and vulnerability. The application of this framework to the mesoscale Mulde catchment in Germany shows that flood risk can vary dramatically as a consequence of plausible change scenarios. It further reveals that components that have not received much attention, such as changes in dike systems or in vulnerability, may outweigh changes in often investigated components, such as climate. Although the specific results are conditional on the case study area and the selected assumptions, they emphasize the need for a broader consideration of potential drivers of change in a comprehensive way. Hence, our approach contributes to a better understanding of how the different risk components influence the overall flood risk.
The pace-of-life syndrome (POLS) hypothesis posits that suites of traits are correlated along a slow-fast continuum owing to life history trade-offs. Despite widespread adoption, the environmental conditions driving the emergence of POLS remain unclear. A recently proposed conceptual framework of POLS suggests that a slow-fast continuum should align with fluctuations in density-dependent selection. We tested three key predictions made by this framework with an eco-evolutionary agent-based population model. Selection acted on responsiveness (behavioral trait) to interpatch resource differences and on the reproductive investment threshold (life history trait). Across environments with density fluctuations of different magnitudes, we observed the emergence of a common axis of trait covariation between and within populations (i.e., the evolution of a POLS). Slow-type (fast-type) populations with high (low) responsiveness and low (high) reproductive investment threshold were selected at high (low) population densities and less (more) intense and frequent density fluctuations. In support of the predictions, fast-type populations contained a higher degree of variation in traits and were associated with a higher intrinsic reproductive rate (r₀) and a higher sensitivity to intraspecific competition (γ), pointing to a universal trade-off. While our findings support that POLS aligns with density-dependent selection, we discuss possible mechanisms that may lead to alternative evolutionary pathways.
We study populations of globally coupled noisy rotators (oscillators with inertia) that allow a nonequilibrium transition from a desynchronized state to a synchronous one (with a nonvanishing order parameter). The newly developed analytical approaches result in solutions describing the synchronous state with a constant order parameter for weakly inertial rotators, including the case of zero inertia, where the model reduces to the Kuramoto model of coupled noisy oscillators. These approaches also provide analytical criteria distinguishing supercritical and subcritical transitions to the desynchronized state and indicate the universality of such transitions in rotator ensembles. All the obtained analytical results are confirmed numerically, both by direct simulations of large ensembles and by solution of the associated Fokker-Planck equation. We also propose generalizations of the developed approaches for setups in which different rotator parameters (natural frequencies, masses, noise intensities, strengths and phase shifts in coupling) are dispersed.
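A direct simulation of such an ensemble is straightforward with an Euler-Maruyama scheme and the mean-field order parameter; the parameter values below are illustrative choices, not those studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler-Maruyama simulation of globally coupled noisy rotators:
#   m dv_i = (-v_i + omega_i + K R sin(Theta - phi_i)) dt + sigma dW_i,
#   dphi_i = v_i dt,
# where R e^{i Theta} = <e^{i phi}> is the mean-field order parameter.
N, m, K, sigma = 2000, 0.5, 4.0, 1.0
dt, steps = 0.005, 20000
omega = np.zeros(N)                       # identical natural frequencies
phi = rng.uniform(0.0, 2.0 * np.pi, N)    # start desynchronized
v = np.zeros(N)

for _ in range(steps):
    z = np.exp(1j * phi).mean()
    R, Theta = np.abs(z), np.angle(z)
    force = -v + omega + K * R * np.sin(Theta - phi)
    v += (force * dt + sigma * np.sqrt(dt) * rng.standard_normal(N)) / m
    phi += v * dt

print("stationary order parameter R ~", round(np.abs(np.exp(1j * phi).mean()), 3))
```

With coupling well above the zero-inertia critical value, the incoherent initial state is unstable and the simulated R settles at a nonvanishing value, illustrating the transition discussed above.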
A comprehensive hydrometeorological dataset is presented, spanning the period from 1 January 2011 to 31 December 2014, to improve the understanding of the hydrological processes leading to flash floods and the relation between rainfall, runoff, erosion and sediment transport in a mesoscale catchment (Auzon, 116 km²) of the Mediterranean region. Badlands are present in the Auzon catchment and are well connected to high-gradient channels of bedrock rivers, which promotes the transfer of suspended solids downstream. The number of observed variables, the various sensors involved (both in situ and remote) and the space-time resolution (~km², ~min) of this comprehensive dataset make it a unique contribution to research communities focused on hydrometeorology, surface hydrology and erosion. Given that rainfall is highly variable in space and time in this region, the observation system enables assessment of the hydrological response to rainfall fields. Indeed, (i) rainfall data are provided by rain gauges (both a research network of 21 rain gauges with a 5 min time step and an operational network of 10 rain gauges with a 5 min or 1 h time step), S-band Doppler dual-polarization radars (1 km², 5 min resolution), disdrometers (16 sensors working at a 30 s or 1 min time step) and Micro Rain Radars (5 sensors, 100 m height resolution). Additionally, during the special observation period (SOP-1) of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) project, two X-band radars provided precipitation measurements at very fine spatial and temporal scales (1 ha, 5 min). (ii) Other meteorological data are taken from the operational surface weather observation stations of Météo-France (including 2 m air temperature, atmospheric pressure, 2 m relative humidity, 10 m wind speed and direction, and global radiation) at hourly time resolution (six stations in the region of interest). (iii) The monitoring of surface hydrology and suspended sediment is multi-scale and based on nested catchments. Three hydrometric stations estimate water discharge at a 2-10 min time resolution. Two of these stations also measure additional physico-chemical variables (turbidity, temperature, conductivity), and water samples are collected automatically during floods, allowing further geochemical characterization of water and suspended solids. Two experimental plots monitor overland flow and erosion at 1 min time resolution on a hillslope with a vineyard. A network of 11 sensors installed in the intermittent hydrographic network continuously measures water level and water temperature in headwater subcatchments (from 0.17 to 116 km²) at a time resolution of 2-5 min. A network of soil moisture sensors enables the continuous measurement of soil volumetric water content at 20 min time resolution at 9 sites. Additionally, concomitant observations (soil moisture measurements and stream gauging) were performed during floods between 2012 and 2014. Finally, this dataset is considered appropriate for understanding rainfall variability in time and space at fine scales, improving areal rainfall estimations and progressing in distributed hydrological and erosion modelling.
The Sea-level Response to Ice Sheet Evolution (SeaRISE) effort explores the sensitivity of the current generation of ice sheet models to external forcing to gain insight into the potential future contribution to sea level from the Greenland and Antarctic ice sheets. All participating models simulated the ice sheet response to three types of external forcings: a change in oceanic condition, a warmer atmospheric environment, and enhanced basal lubrication. Here an analysis of the spatial response of the Greenland ice sheet is presented, and the impact of model physics and spin-up on the projections is explored. Although the modeled responses are not always homogeneous, consistent spatial trends emerge from the ensemble analysis, indicating distinct vulnerabilities of the Greenland ice sheet. There are clear response patterns associated with each forcing, and a similar mass loss at the full ice sheet scale will result in different mass losses at the regional scale, as well as distinct thickness changes over the ice sheet. All forcings lead to an increased mass loss for the coming centuries, with increased basal lubrication and warmer ocean conditions affecting mainly outlet glaciers, while the impacts of atmospheric forcings affect the whole ice sheet.
Nonstationary coherence-incoherence patterns in nonlocally coupled heterogeneous phase oscillators
(2020)
We consider a large ring of nonlocally coupled phase oscillators and show that apart from stationary chimera states, this system also supports nonstationary coherence-incoherence patterns (CIPs). For identical oscillators, these CIPs behave as breathing chimera states and are found in a relatively small parameter region only. It turns out that the stability region of these states enlarges dramatically if a certain amount of spatially uniform heterogeneity (e.g., Lorentzian distribution of natural frequencies) is introduced in the system. In this case, nonstationary CIPs can be studied as stable quasiperiodic solutions of a corresponding mean-field equation, formally describing the infinite system limit. Carrying out direct numerical simulations of the mean-field equation, we find different types of nonstationary CIPs with pulsing and/or alternating chimera-like behavior. Moreover, we reveal a complex bifurcation scenario underlying the transformation of these CIPs into each other. These theoretical predictions are confirmed by numerical simulations of the original coupled oscillator system.
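The reference setup can be reproduced qualitatively with a ring of Kuramoto-Sakaguchi phase oscillators and a box coupling kernel, evaluating the nonlocal coupling sum as a circular convolution via the FFT. Kernel width, phase lag, initial condition, and the toy Lorentzian heterogeneity below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Ring of N nonlocally coupled Kuramoto-Sakaguchi phase oscillators:
#   dphi_k/dt = omega_k + sum_j G_{k-j} sin(phi_j - phi_k - alpha),
# with a box kernel over the P nearest neighbours on each side. The
# coupling sum is a circular convolution, evaluated here with the FFT.
N, P, alpha = 256, 90, 1.46
dt, steps = 0.025, 40000
G = np.zeros(N)
G[1:P + 1] = G[-P:] = 1.0 / (2 * P)
Gf = np.fft.fft(G)
omega = 0.02 * rng.standard_cauchy(N)     # toy Lorentzian heterogeneity
x = np.linspace(-0.5, 0.5, N)
phi = 6.0 * np.exp(-30.0 * x ** 2) * rng.standard_normal(N)  # localized disorder

for _ in range(steps):
    field = np.fft.ifft(Gf * np.fft.fft(np.exp(1j * phi)))   # local mean field
    # sum_j G sin(phi_j - phi_k - alpha) = Im[field * e^{-i(phi_k + alpha)}]
    phi += dt * (omega + (field * np.exp(-1j * (phi + alpha))).imag)

# Local order parameter: near 1 in coherent regions, low where incoherent.
R_local = np.abs(np.fft.ifft(Gf * np.fft.fft(np.exp(1j * phi))))
print(round(R_local.min(), 2), round(R_local.max(), 2))
```

A large spread between the minimum and maximum of the local order parameter indicates the coexistence of coherent and incoherent regions characteristic of such patterns.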
Nearly 13,000 years ago, the warming trend into the Holocene was sharply interrupted by a reversal to near-glacial conditions. Climatic causes and ecological consequences of the Younger Dryas (YD) have been extensively studied; however, proxy archives from the Mediterranean basin capturing this period are scarce and do not provide annual resolution. Here, we report a hydroclimatic reconstruction from stable isotopes (δ¹⁸O, δ¹³C) in subfossil pines from southern France. Growing before and during the transition period into the YD (12 900–12 600 cal BP), the trees provide an annually resolved, continuous sequence of atmospheric change. The isotopic signature of tree source water (δ¹⁸O_sw) and estimates of relative air humidity were reconstructed as proxies for variations in air-mass origin and precipitation regime. We find a distinct increase in the inter-annual variability of source-water isotopes (δ¹⁸O_sw), with three major downturn phases of increasing magnitude beginning at 12 740 cal BP. The observed variation most likely results from an amplified intensity of North Atlantic (low δ¹⁸O_sw) versus Mediterranean (high δ¹⁸O_sw) precipitation. This marked pattern of climate variability is not seen in records from higher latitudes and is likely a consequence of atmospheric circulation oscillations at the margin of the southward-moving polar front.
The economic assessment of the impacts of storm surges and sea-level rise in coastal cities requires high-level information on the damage and protection costs associated with varying flood heights. We provide a systematically and consistently calculated dataset of macroscale damage and protection cost curves for the 600 largest European coastal cities, opening up the perspective for a wide range of applications. Offering the first comprehensive dataset to include the costs of dike protection, we provide the underpinning information to run comparative assessments of the costs and benefits of coastal adaptation. Aggregate cost curves for coastal flooding at the city level are commonly regarded as by-products of impact assessments and are generally not published as a standalone dataset. Hence, our work also aims at initiating a more critical discussion on the availability and derivation of cost curves.
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events to a quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies of damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability for granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitude and on both the microscale and the macroscale level. Main findings are the dominance of uncertainty from the hazard magnitude and the persistent behaviour of intrinsic uncertainties on both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from district to country level. We discuss the broad and heavily skewed distribution of insured losses posing difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic wind–damage relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two sources of wind gust data, direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of gust data has little impact on the evaluation of German storm loss, spatially resolved coefficients of variation reveal dependence between model and data choice. The comparison shows that the probabilistic models by Heneka et al. (2006) and Prahl et al. (2012) both provide accurate loss predictions for moderate to extreme losses, with generally small coefficients of variation. We favour the latter model in terms of model applicability. Application of the versatile deterministic model by Klawa and Ulbrich (2003) should be restricted to extreme loss, for which it shows the least bias and errors comparable to the probabilistic model by Prahl et al. (2012).
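The cubic wind-damage relationship discussed above goes back to Klawa and Ulbrich (2003), whose loss index considers only gusts exceeding the local 98th percentile and weights the relative exceedance cubically. A minimal Python sketch with synthetic data (district gusts, thresholds, and populations are invented):

    import numpy as np

    rng = np.random.default_rng(1)
    v98 = rng.uniform(18.0, 25.0, 100)       # local 98th-percentile gust (m/s)
    v = v98 * rng.uniform(0.8, 1.3, 100)     # gusts on a given storm day
    pop = rng.integers(1_000, 500_000, 100)  # district population weights

    # Only exceedances of the local threshold contribute, weighted cubically
    excess = np.maximum(v / v98 - 1.0, 0.0)
    loss_index = np.sum(pop * excess**3)
    print(loss_index)

Because the exponent acts on the relative exceedance, a few extreme gusts dominate the index, consistent with the heavily skewed loss distribution noted above.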
Experiments in research on memory, language, and in other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether or not it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe basic questions one should ask about the model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is data on reading times with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
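A prior predictive check, the first step of the workflow named above, can be carried out without any special software. The following Python sketch is an illustration, not the paper's code; all prior values are assumptions. It simulates reading times from a lognormal model under candidate priors to see whether the implied data are plausible:

    import numpy as np

    rng = np.random.default_rng(42)
    n_obs = 200

    for _ in range(5):  # a few draws from the prior predictive distribution
        mu = rng.normal(6.0, 0.6)          # prior on log-ms intercept (assumed)
        beta = rng.normal(0.0, 0.1)        # prior on clause-type effect (assumed)
        sigma = abs(rng.normal(0.0, 0.5))  # half-normal prior on residual sd (assumed)
        cond = rng.choice([-0.5, 0.5], n_obs)        # sum-coded condition
        rt = rng.lognormal(mu + beta * cond, sigma)  # simulated reading times (ms)
        print(round(rt.min()), round(np.median(rt)), round(rt.max()))

If such draws routinely produce reading times of microseconds or hours, the priors encode implausible domain knowledge and should be revised before fitting.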
Robust appraisals of climate impacts at different levels of global-mean temperature increase are vital to guide assessments of dangerous anthropogenic interference with the climate system. The 2015 Paris Agreement includes a two-headed temperature goal: "holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C". Despite the prominence of these two temperature limits, a comprehensive overview of the differences in climate impacts at these levels is still missing. Here we provide an assessment of key impacts of climate change at warming levels of 1.5 °C and 2 °C, including extreme weather events, water availability, agricultural yields, sea-level rise and risk of coral reef loss. Our results reveal substantial differences in impacts between a 1.5 °C and 2 °C warming that are highly relevant for the assessment of dangerous anthropogenic interference with the climate system. For heat-related extremes, the additional 0.5 °C increase in global-mean temperature marks the difference between events at the upper limit of present-day natural variability and a new climate regime, particularly in tropical regions. Similarly, this warming difference is likely to be decisive for the future of tropical coral reefs. In a scenario with an end-of-century warming of 2 °C, virtually all tropical coral reefs are projected to be at risk of severe degradation due to temperature-induced bleaching from 2050 onwards. For a 1.5 °C scenario, this fraction is reduced to about 90% in 2050 and is projected to decline to 70% by 2100. Analyses of precipitation-related impacts reveal distinct regional differences, and hot spots of change emerge. The regional reduction in median water availability for the Mediterranean is found to nearly double, from 9% to 17%, between 1.5 °C and 2 °C, and the projected lengthening of regional dry spells increases from 7% to 11%. Projections for agricultural yields differ between crop types as well as world regions. While some (in particular high-latitude) regions may benefit, tropical regions like West Africa, South-East Asia, as well as Central and northern South America are projected to face substantial local yield reductions, particularly for wheat and maize. Best-estimate sea-level rise projections based on two illustrative scenarios indicate a 50 cm rise by 2100 relative to year-2000 levels for a 2 °C scenario, and about 10 cm lower levels for a 1.5 °C scenario. In a 1.5 °C scenario, the rate of sea-level rise in 2100 would be reduced by about 30% compared to a 2 °C scenario. Our findings highlight the importance of regional differentiation in assessing both future climate risks and different vulnerabilities to incremental increases in global-mean temperature. The article provides a consistent and comprehensive assessment of existing projections and a good basis for future work on refining our understanding of the difference between impacts at 1.5 °C and 2 °C warming.
Determination of historical parameters of a small catchment, using the Pfefferfließ as an example
(2010)
Using the example of a small stream (the Pfefferfließ), the hydrological situation of a near-natural state in the 18th century was reconstructed using a range of methods. The reconstruction of this near-natural 18th-century state was based on historical sources such as maps, manuscripts, and land drainage (melioration) plans. The detection and surveying of historical channel cross-sections, as well as modelling of 18th-century discharge, likewise contributed to the overall picture of the period. The insights gained from these data were then evaluated with regard to their further use as a reference state for river restoration measures.
A large part of classical visual psychophysics was concerned with the fundamental question of how pattern information is initially encoded in the human visual system. From these studies a relatively standard model of early spatial vision emerged, based on spatial-frequency- and orientation-specific channels followed by an accelerating nonlinearity and divisive normalization: contrast gain control. Here we implement such a model in an image-computable way, allowing it to take arbitrary luminance images as input. Testing our implementation on classical psychophysical data, we find that it explains contrast detection data (including the ModelFest data), contrast discrimination data, and oblique masking data, using a single set of parameters. Leveraging the advantage of an image-computable model, we test our model against a recent dataset using natural images as masks. We find that the model explains these data reasonably well, too. To explain data obtained at different presentation durations, our model requires different parameters to achieve an acceptable fit. In addition, we show that contrast gain control with the fitted parameters results in a very sparse encoding of luminance information, in line with notions from efficient coding. Translating the standard early spatial vision model into an image-computable form yielded two further insights: first, the nonlinear processing requires a denser sampling of spatial frequency and orientation than optimal coding suggests; second, the normalization needs to be fairly local in space to fit the data obtained with natural image masks. Finally, our image-computable model can serve as a tool in future quantitative analyses: it allows optimized stimuli to be used to test the model and variants of it, with potential applications as an image-quality metric. In addition, it may serve as a building block for models of higher-level processing.
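The core nonlinearity is compact enough to state in a few lines. A minimal Python sketch of an accelerating nonlinearity with divisive normalization (the exponents and semi-saturation constant below are illustrative choices, not the fitted parameters of the published model):

    import numpy as np

    def gain_control(channel_resp, p=2.4, q=2.0, c50=0.05):
        """channel_resp: linear channel outputs (e.g. orientation x frequency).
        Returns normalized responses; p, q, c50 are illustrative constants."""
        excitation = np.abs(channel_resp) ** p
        pool = c50 ** q + np.sum(np.abs(channel_resp) ** q)  # divisive pool
        return excitation / pool

    resp = np.array([0.01, 0.08, 0.30, 0.05])  # toy channel outputs
    print(gain_control(resp))

With p slightly larger than q, responses accelerate at low contrast and saturate once the pooled activity dominates, which is the behaviour that contrast discrimination data constrain. In the published model the pooling is spatially local, as noted above.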
Trade plays a key role in the spread of alien species and has arguably contributed to the recent enormous acceleration of biological invasions, thus homogenizing biotas worldwide. Combining data on 60-year trends of bilateral trade, as well as on biodiversity and climate, we modeled the global spread of plant species among 147 countries. The model results were compared with a recently compiled unique global data set on numbers of naturalized alien vascular plant species representing the most comprehensive collection of naturalized plant distributions currently available. The model identifies major source regions, introduction routes, and hot spots of plant invasions that agree well with observed naturalized plant numbers. In contrast to common knowledge, we show that the 'imperialist dogma,' stating that Europe has been a net exporter of naturalized plants since colonial times, does not hold for the past 60 years, when more naturalized plants were being imported to than exported from Europe. Our results highlight that the current distribution of naturalized plants is best predicted by socioeconomic activities 20 years ago. We took advantage of the observed time lag and used trade developments until recent times to predict naturalized plant trajectories for the next two decades. This shows that particularly strong increases in naturalized plant numbers are expected in the next 20 years for emerging economies in megadiverse regions. The interaction with predicted future climate change will increase invasions in northern temperate countries and reduce them in tropical and (sub)tropical regions, yet not by enough to cancel out the trade-related increase.
The Cluster mission has produced a large data set of electron flux measurements in the Earth's magnetosphere since its launch in late 2000. Electron fluxes are measured using the Research with Adaptive Particle Imaging Detectors (RAPID)/Imaging Electron Spectrometer (IES) detector as a function of energy, pitch angle, spacecraft position, and time. However, no adiabatic invariants have been calculated for Cluster so far. In this paper we present a step-by-step guide to the calculation of adiabatic invariants and the conversion of the electron flux to phase space density (PSD) in these coordinates. The electron flux is measured in two RAPID/IES energy channels, providing pitch angle distributions at energies of 39.2-50.5 and 68.1-94.5 keV in nominal mode since 2004. A fitting method allows the conversion of the differential fluxes to be extended to the range from 40 to 150 keV. The best data coverage for phase space density in adiabatic invariant coordinates is obtained for values of the second adiabatic invariant K ~ 10^2 and values of the first adiabatic invariant μ in the range of approximately 5-20 MeV/G. Furthermore, we describe the production of a new data product, "LSTAR", equivalent to the third adiabatic invariant, available through the Cluster Science Archive for the years 2001-2018 with 1-min resolution. The produced data set adds to the availability of observations in the Earth's radiation belt region and can be used for long-term statistical purposes.
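The PSD conversion step mentioned above follows the standard relation f = j / p^2, with the relativistic momentum computed from the electron kinetic energy. A schematic Python sketch (the flux fitting and unit conversion factors are deliberately omitted; the input values are synthetic):

    ME_C2 = 0.511  # electron rest energy, MeV

    def psd_from_flux(j, energy_mev):
        """f = j / (pc)^2 with (pc)^2 = E * (E + 2 * m_e c^2).
        j: differential flux; energy_mev: kinetic energy E in MeV.
        Unit bookkeeping is kept schematic here."""
        pc2 = energy_mev * (energy_mev + 2.0 * ME_C2)  # (pc)^2 in MeV^2
        return j / pc2

    # Synthetic example near the lower RAPID/IES channel (~45 keV):
    print(psd_from_flux(j=1.0e4, energy_mev=0.045))

In practice this conversion is applied after interpolating the fitted flux spectrum to fixed values of the adiabatic invariants, as the step-by-step guide above describes.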
In this study, we present an empirical model of the equatorial electron pitch angle distributions (PADs) in the outer radiation belt, based on the full data set collected by the Magnetic Electron Ion Spectrometer (MagEIS) instrument onboard the Van Allen Probes in 2012-2019. The PADs are fitted with a combination of the first, third, and fifth sine harmonics. The resulting equation resolves all PAD types found in the outer radiation belt (pancake, flat-top, butterfly, and cap PADs) and can be analytically integrated to derive the omnidirectional flux. We introduce a two-step modeling procedure that for the first time ensures a continuous dependence on L, magnetic local time, and activity, parametrized by the solar wind dynamic pressure. We propose two methods to reconstruct the equatorial electron flux using the model. The first approach requires two unidirectional flux observations and is applicable to data at low pitch angles. The second method can be used to reconstruct the full equatorial PADs from a single unidirectional or omnidirectional measurement at off-equatorial latitudes. The model can be used for converting long-term data sets of electron fluxes to phase space density in terms of adiabatic invariants, for physics-based modeling in the form of boundary conditions, and for data assimilation purposes.
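The harmonic form makes the solid-angle integral simple: by the orthogonality of the sine harmonics on [0, π], only the first harmonic contributes to the omnidirectional flux. A short Python check with arbitrary example coefficients (not fitted values from the model):

    import numpy as np

    A1, A3, A5 = 1.0e5, -2.0e4, 5.0e3  # synthetic harmonic amplitudes

    def pad(alpha):  # j(a) = A1 sin(a) + A3 sin(3a) + A5 sin(5a)
        return A1*np.sin(alpha) + A3*np.sin(3*alpha) + A5*np.sin(5*alpha)

    alpha = np.linspace(0.0, np.pi, 2001)
    dalpha = alpha[1] - alpha[0]
    # Omnidirectional flux: integrate over solid angle with 2*pi*sin(alpha)
    j_omni = 2.0*np.pi*np.sum(pad(alpha)*np.sin(alpha))*dalpha

    print(j_omni, np.pi**2 * A1)  # numeric integral vs analytic pi^2 * A1

Both numbers agree, illustrating why the fitted equation can be analytically integrated to yield the omnidirectional flux, as stated above.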
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands a corresponding model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge in an ever-changing business environment: once a process is changed, all of its models have to be updated. Due to the large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction has emerged as an operation that reduces the number of stored process models and facilitates model management. Business process model abstraction is an operation that preserves essential process properties and leaves out insignificant details in order to retain the information relevant for a particular purpose. Process model abstraction has been addressed by several researchers, whose studies have focused on particular use cases and on model transformations supporting those use cases. This thesis systematically approaches the problem of business process model abstraction and shapes the outcome into a framework. We investigate the current industry demand for abstraction, summarizing it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case in which the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on the analysis of process model structure. Further, abstraction methods that consider the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models, a barely researched challenge. The designed abstraction methods are validated against sets of industrial process models, and their implementation aspects are discussed. Second, we develop a novel model transformation which, combined with related-activity discovery, allows flexible non-hierarchical abstraction. In this way, the thesis advocates novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
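To make the related-activity idea concrete, here is a toy Python sketch (not the thesis' algorithms) that uses term overlap between activity labels as a crude stand-in for semantic relatedness and collapses each group into one coarse-grained activity:

    # Toy illustration only; the thesis uses more elaborate discovery methods.
    activities = ["check invoice", "approve invoice", "ship goods", "pack goods"]

    def jaccard(a, b):  # term overlap between two activity labels
        ta, tb = set(a.split()), set(b.split())
        return len(ta & tb) / len(ta | tb)

    groups = []
    for act in activities:
        for g in groups:
            if any(jaccard(act, other) >= 0.3 for other in g):
                g.append(act)
                break
        else:
            groups.append([act])

    coarse = [" / ".join(g) for g in groups]  # one abstract activity per group
    print(coarse)  # ['check invoice / approve invoice', 'ship goods / pack goods']

In a non-hierarchical abstraction, such groups need not respect the block structure of the model, which is exactly what the second transformation above addresses.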
While previous research underscores the role of leaders in stimulating employee voice behaviour, comparatively little is known about what affects leaders' support for such constructive but potentially threatening employee behaviours. We introduce leader-member exchange (LMX) quality as a central predictor of leaders' support for employees' ideas for constructive change. Apart from a general benefit of high LMX for leaders' idea support, we propose that high LMX is particularly critical to leaders' idea support if the idea voiced by an employee constitutes a power threat to the leader. We investigate leaders' attribution of prosocial and egoistic employee intentions as mediators of these effects. Hypotheses were tested in a quasi-experimental vignette study (N = 160), in which leaders evaluated a simulated employee idea, and a field study (N = 133), in which leaders evaluated an idea that had been voiced to them at work. Results show an indirect effect of LMX on leaders' idea support via attributed prosocial intentions but not via attributed egoistic intentions, and a buffering effect of high LMX on the negative effect of power threat on leaders' idea support. Results differed across studies with regard to the main effect of LMX on idea support.
The neodymium isotopic composition (εNd) has enjoyed widespread use as a palaeotracer, principally because it behaves quasi-conservatively in the modern ocean. However, recent bottom-water εNd reconstructions from the eastern North Atlantic are difficult to interpret under assumptions of conservative behaviour. The observation that this apparent departure from conservative behaviour increases with enhanced ice-rafted debris (IRD) fluxes has led to the suggestion that IRD overprints bottom-water εNd through reversible scavenging. In this study, a simple water column model successfully reproduces εNd reconstructions from the eastern North Atlantic at the Last Glacial Maximum and Heinrich Stadial 1, and demonstrates that the changes in scavenging intensity required for a good model-data fit are in good agreement with changes in the observed IRD flux. Although uncertainties in model parameters preclude a more definitive conclusion, the results indicate that the suggestion of IRD as a source of non-conservative behaviour in the εNd tracer is reasonable, and that further research into the fundamental chemistry underlying the marine neodymium cycle is necessary to increase confidence in assumptions of conservative εNd behaviour in the past.
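The mechanism can be caricatured in a few lines: reversible scavenging nudges the dissolved εNd of a water column toward the particle-borne (IRD-derived) value, more strongly when scavenging is more intense. A toy Python sketch of such a relaxation with depth (invented endmembers and rates, far simpler than the study's water column model):

    import numpy as np

    z = np.linspace(0.0, 4000.0, 401)  # depth (m)
    eps_top = -13.5                    # advected water-mass epsNd (invented)
    eps_part = -25.0                   # IRD/particle endmember (invented)

    def bottom_epsnd(scav_per_m):
        """Relax dissolved epsNd toward the particle value with depth."""
        e = eps_top
        for i in range(1, z.size):
            dz = z[i] - z[i-1]
            e += scav_per_m * dz * (eps_part - e)
        return e

    for k in (0.0, 2e-4, 1e-3):  # increasing scavenging intensity
        print(k, round(bottom_epsnd(k), 2))

As the scavenging rate k increases, the bottom-water value migrates from the conservative (advected) signature toward the IRD endmember, which is the sense in which scavenging intensity is tied to the observed IRD flux above.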
In older persons, the origin of malnutrition is often multifactorial, with a multitude of factors involved. At present, a common understanding of potential causes and their modes of action is lacking, and no consensus exists on a theoretical framework for the etiology of malnutrition. Within the European Knowledge Hub "Malnutrition in the Elderly (MaNuEL)", a model of "Determinants of Malnutrition in Aged Persons" (DoMAP) was developed in a multistage consensus process with live meetings and written feedback (a modified Delphi process) by a multiprofessional group of 33 experts in geriatric nutrition. DoMAP consists of three triangle-shaped levels with malnutrition in the center, surrounded, in the innermost level, by the three principal conditions through which malnutrition develops: low intake, high requirements, and impaired nutrient bioavailability. The middle level consists of factors directly causing one of these conditions, and the outermost level contains factors indirectly causing one of the three conditions through the direct factors. The DoMAP model may contribute to a common understanding of the multitude of factors involved in the etiology of malnutrition and of potential causative mechanisms. It may serve as a basis for future research and may also be helpful in clinical routine to identify persons at increased risk of malnutrition.
Now that the United Kingdom has left the European Union, it remains unclear whether the two parties can successfully negotiate and sign a trade agreement within the transition period. Ongoing negotiations, practical obstacles, and the resulting uncertainties make it highly unlikely that economic actors would be fully prepared for a “no-trade-deal” situation. Here we provide an economic shock simulation of the immediate aftermath of such a post-Brexit no-trade-deal scenario by computing the time evolution of more than 1.8 million interactions between more than 6,600 economic actors in the global trade network. We find an abrupt decline in the number of goods produced in the UK and the EU. This sudden output reduction is caused by drops in demand as customers on the respective other side of the Channel incorporate the new trade restriction into their decision-making. In response, producers reduce prices in order to stimulate demand elsewhere. In the short term, consumers benefit from lower prices, but production value decreases, with potentially severe socio-economic consequences in the longer term.
Isostasy is one of the oldest and most widely applied concepts in the geosciences, but the geoscientific community lacks a coherent, easy-to-use tool to simulate flexure of a realistic (i.e., laterally heterogeneous) lithosphere under an arbitrary set of surface loads. Such a model is needed for studies of mountain building, sedimentary basin formation, glaciation, sea-level change, and other tectonic, geodynamic, and surface processes. Here I present gFlex (for GNU flexure), an open-source model that can produce analytical and finite difference solutions for lithospheric flexure in one (profile) and two (map view) dimensions. To simulate the flexural isostatic response to an imposed load, it can be used by itself or within GRASS GIS for better integration with field data. gFlex is also a component within the Community Surface Dynamics Modeling System (CSDMS) and Landlab modeling frameworks for coupling with a wide range of Earth-surface-related models, and it can be coupled to additional models within Python scripts. As an example of this in-script coupling, I simulate the effects of spatially variable lithospheric thickness on a modeled Iceland ice cap. Finite difference solutions in gFlex can use any of five types of boundary conditions: 0-displacement, 0-slope (i.e., clamped); 0-slope, 0-shear; 0-moment, 0-shear (i.e., broken plate); mirror symmetry; and periodic. Typical calculations with gFlex require from much less than 1 s to about 1 min on a personal laptop computer. These characteristics (multiple ways to run the model, multiple solution methods, multiple boundary conditions, and short compute times) make gFlex an effective tool for flexural isostatic modeling across the geosciences.
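For a uniform plate, the analytical branch of such a model reduces to textbook solutions that can be superposed for arbitrary loads. A minimal Python sketch of the classical deflection of an infinite elastic plate under a line load (a standard Turcotte-and-Schubert-style result, not gFlex's own code; parameter values are arbitrary examples):

    import numpy as np

    E, nu = 65e9, 0.25   # Young's modulus (Pa), Poisson's ratio (examples)
    Te = 20e3            # effective elastic thickness (m)
    g = 9.81             # gravity (m/s^2)
    drho = 3300.0        # mantle-minus-infill density contrast (kg/m^3)

    D = E * Te**3 / (12.0 * (1.0 - nu**2))  # flexural rigidity (N m)
    alpha = (4.0 * D / (drho * g)) ** 0.25  # flexural parameter (m)

    def w(x, V0=1.0e12):
        """Deflection (m) at distance x (m) from a line load V0 (N/m)."""
        xa = np.abs(x) / alpha
        return V0 * alpha**3 / (8.0 * D) * np.exp(-xa) * (np.cos(xa) + np.sin(xa))

    x = np.linspace(-300e3, 300e3, 7)
    print(alpha, w(x))  # maximum subsidence under the load, flanking bulge

A laterally heterogeneous lithosphere breaks this superposition, which is why the finite difference solutions with the boundary conditions listed above are needed.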