3D from 2D touch
(2013)
While interaction with computers used to be dominated by mice and keyboards, new types of sensors now allow users to interact through touch, speech, or using their whole body in 3D space. These new interaction modalities are often referred to as "natural user interfaces" or "NUIs." While 2D NUIs have experienced major success on billions of mobile touch devices sold, 3D NUI systems have so far been unable to deliver a mobile form factor, mainly due to their use of cameras. The fact that cameras require a certain distance from the capture volume has prevented 3D NUI systems from reaching the flat form factor mobile users expect. In this dissertation, we address this issue by sensing 3D input using flat 2D sensors. The systems we present observe the input from 3D objects as 2D imprints upon physical contact. By sampling these imprints at very high resolutions, we obtain the objects' textures. In some cases, a texture uniquely identifies a biometric feature, such as the user's fingerprint. In other cases, an imprint stems from the user's clothing, such as when walking on multitouch floors. By analyzing from which part of the 3D object the 2D imprint results, we reconstruct the object's pose in 3D space. While our main contribution is a general approach to sensing 3D input on 2D sensors upon physical contact, we also demonstrate three applications of our approach. (1) We present high-accuracy touch devices that allow users to reliably touch targets that are a third of the size of those on current touch devices. We show that different users and 3D finger poses systematically affect touch sensing, which current devices perceive as random input noise. We introduce a model for touch that compensates for this systematic effect by deriving the 3D finger pose and the user's identity from each touch imprint. We then investigate this systematic effect in detail and explore how users conceptually touch targets. Our findings indicate that users aim by aligning visual features of their fingers with the target. We present a visual model for touch input that eliminates virtually all systematic effects on touch accuracy. (2) From each touch, we identify users biometrically by analyzing their fingerprints. Our prototype Fiberio integrates fingerprint scanning and a display into the same flat surface, solving a long-standing problem in human-computer interaction: secure authentication on touchscreens. Sensing 3D input and authenticating users upon touch allows Fiberio to implement a variety of applications that traditionally require the bulky setups of current 3D NUI systems. (3) To demonstrate the versatility of 3D reconstruction on larger touch surfaces, we present a high-resolution pressure-sensitive floor that resolves the texture of objects upon touch. Using the same principles as before, our system GravitySpace analyzes all imprints and identifies users based on their shoe soles, detects furniture, and enables accurate touch input using feet. By classifying all imprints, GravitySpace detects the users' body parts that are in contact with the floor and then reconstructs their 3D body poses using inverse kinematics. GravitySpace thus enables a range of applications for future 3D NUI systems based on a flat sensor, such as smart rooms in future homes. We conclude this dissertation by projecting into the future of mobile devices. Focusing on the mobility aspect of our work, we explore how NUI devices may one day augment users directly in the form of implanted devices.
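The touch-model idea above lends itself to a compact illustration. The sketch below shows one way a pose- and user-dependent systematic offset could be learned and subtracted from raw touch positions; the feature names, the linear model, and the synthetic data are hypothetical, not the dissertation's actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training data: for each touch we have imprint-derived features
# (stand-ins for estimated finger pitch/roll/yaw plus a user feature) and the
# systematic offset between the sensed 2D position and the intended target.
rng = np.random.default_rng(0)
n = 500
features = rng.normal(size=(n, 4))
true_map = np.array([[1.2, 0.0], [0.4, -0.8], [0.0, 0.5], [0.3, 0.1]])
offsets = features @ true_map + rng.normal(scale=0.2, size=(n, 2))  # offset + noise

# Fit offset = f(pose, user); at run time the predicted systematic offset
# is subtracted from the raw touch position.
model = Ridge().fit(features, offsets)

def corrected_touch(raw_xy, touch_features):
    """Subtract the predicted systematic offset from the sensed position."""
    return raw_xy - model.predict(touch_features[None, :])[0]
```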
A comprehensive hydrometeorological dataset is presented spanning the period 1 January 2011–31 December 2014 to improve the understanding of the hydrological processes leading to flash floods and the relation between rainfall, runoff, erosion and sediment transport in a mesoscale catchment (Auzon, 116 km²) of the Mediterranean region. Badlands are present in the Auzon catchment and are well connected to the high-gradient channels of bedrock rivers, which promotes the transfer of suspended solids downstream. The number of observed variables, the various sensors involved (both in situ and remote) and the space-time resolution (~km², ~min) of this comprehensive dataset make it a unique contribution to research communities focused on hydrometeorology, surface hydrology and erosion. Given that rainfall is highly variable in space and time in this region, the observation system enables assessment of the hydrological response to rainfall fields. Indeed, (i) rainfall data are provided by rain gauges (both a research network of 21 rain gauges with a 5 min time step and an operational network of 10 rain gauges with a 5 min or 1 h time step), S-band Doppler dual-polarization radars (1 km², 5 min resolution), disdrometers (16 sensors working at a 30 s or 1 min time step) and Micro Rain Radars (5 sensors, 100 m height resolution). Additionally, during the special observation period (SOP-1) of the HyMeX (Hydrological Cycle in the Mediterranean Experiment) project, two X-band radars provided precipitation measurements at very fine spatial and temporal scales (1 ha, 5 min). (ii) Other meteorological data are taken from the operational surface weather observation stations of Météo-France (including 2 m air temperature, atmospheric pressure, 2 m relative humidity, 10 m wind speed and direction, and global radiation) at hourly time resolution (six stations in the region of interest). (iii) The monitoring of surface hydrology and suspended sediment is multi-scale and based on nested catchments. Three hydrometric stations estimate water discharge at a 2-10 min time resolution. Two of these stations also measure additional physico-chemical variables (turbidity, temperature, conductivity), and water samples are collected automatically during floods, allowing further geochemical characterization of water and suspended solids. Two experimental plots monitor overland flow and erosion at 1 min time resolution on a vineyard hillslope. A network of 11 sensors installed in the intermittent hydrographic network continuously measures water level and water temperature in headwater subcatchments (from 0.17 to 116 km²) at a time resolution of 2-5 min. A network of soil moisture sensors enables the continuous measurement of soil volumetric water content at 20 min time resolution at 9 sites. Additionally, concomitant observations (soil moisture measurements and stream gauging) were performed during floods between 2012 and 2014. Finally, this dataset is considered appropriate for understanding rainfall variability in time and space at fine scales, improving areal rainfall estimation and advancing distributed hydrological and erosion modelling.
In recent decades, the Greenland Ice Sheet has been losing mass and has thereby contributed to global sea-level rise. The rate of ice loss is highly relevant for coastal protection worldwide, and the loss is likely to increase under future warming. Beyond a critical temperature threshold, a meltdown of the Greenland Ice Sheet is induced by the self-reinforcing feedback between its lowering surface elevation and its increasing surface mass loss: the more ice that is lost, the lower the ice surface and the warmer the surface air temperature, which fosters further melting and ice loss. The computation of this rate has so far relied on complex numerical models, which are the appropriate tools for capturing the complexity of the problem. By contrast, we aim here at a conceptual understanding: we derive a purposefully simple equation for the self-reinforcing feedback, which is then used to estimate the melt time for different levels of warming from three observable characteristics of the ice sheet and its surroundings. The analysis is purely conceptual in nature and omits important processes, such as ice dynamics, that would be needed for applications to sea-level rise on centennial timescales. If the volume loss is dominated by the feedback, however, the resulting logarithmic equation unifies existing numerical simulations and shows that the melt time depends strongly on the level of warming, with a critical slow-down near the threshold: the median time to lose 10% of the present-day ice volume varies between about 3500 years for a temperature level of 0.5 °C above the threshold and 500 years for 5 °C. Unless future observations show a significantly higher melting sensitivity than currently observed, a complete meltdown is unlikely within the next 2000 years without significant ice-dynamical contributions.
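The logarithmic form mentioned above can be sketched from the feedback alone. Below is a minimal derivation under assumed notation (melt sensitivity μ, near-surface lapse rate Γ, warming above the threshold ΔT, ice thickness h); it illustrates the mechanism and is not the paper's exact equation.

```latex
% Thinning lowers the surface, which warms it via the lapse rate \Gamma
% and accelerates melt -- the self-reinforcing feedback:
\frac{dh}{dt} = -\mu \left( \Delta T + \Gamma \, (h_0 - h) \right)
\quad\Longrightarrow\quad
t(\Delta h) = \frac{1}{\mu \Gamma}
  \ln\!\left( 1 + \frac{\Gamma \, \Delta h}{\Delta T} \right).
```

The time to lose a given thickness Δh grows only logarithmically as the warming ΔT above the threshold decreases, diverging for ΔT → 0, which is the critical slow-down described above.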
The Cluster mission has produced a large data set of electron flux measurements in the Earth's magnetosphere since its launch in late 2000. Electron fluxes are measured by the Research with Adaptive Particle Imaging Detector (RAPID)/Imaging Electron Spectrometer (IES) as a function of energy, pitch angle, spacecraft position, and time. However, no adiabatic invariants have been calculated for Cluster so far. In this paper we present a step-by-step guide to the calculation of adiabatic invariants and the conversion of the electron flux to phase space density (PSD) in these coordinates. The electron flux has been measured in two RAPID/IES energy channels, providing pitch angle distributions at 39.2-50.5 and 68.1-94.5 keV, in nominal mode since 2004. A fitting method allows the conversion of the differential fluxes to be extended to the range from 40 to 150 keV. The best data coverage for phase space density in adiabatic invariant coordinates is obtained for values of the second adiabatic invariant K ~ 10² and values of the first adiabatic invariant μ in the range of approximately 5-20 MeV/G. Furthermore, we describe the production of a new data product, "LSTAR," equivalent to the third adiabatic invariant, available through the Cluster Science Archive for the years 2001-2018 at 1-min resolution. The produced data set adds to the availability of observations in the Earth's radiation belt region and can be used for long-term statistical purposes.
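The flux-to-PSD conversion described above follows the standard relativistic relation f = j/p². A minimal sketch under assumed unit conventions (the 3.33e-8 constant absorbs the keV-to-MeV change and the speed of light in cm/s; this is illustrative, not the paper's pipeline):

```python
import numpy as np

E0 = 0.511  # electron rest energy [MeV]

def flux_to_psd(j_keV, energy_MeV):
    """Convert differential flux j [1/(cm^2 s sr keV)] to phase space density
    f = j / p^2 in the conventional radiation-belt units (c/(MeV cm))^3,
    using (pc)^2 = E (E + 2 E0)."""
    return 3.33e-8 * j_keV / (energy_MeV * (energy_MeV + 2.0 * E0))

def first_invariant(energy_MeV, alpha_eq_rad, B_gauss):
    """First adiabatic invariant mu = p_perp^2 / (2 m_e B) in MeV/G;
    an illustrative helper, not the paper's code."""
    pc2 = energy_MeV * (energy_MeV + 2.0 * E0)  # (pc)^2 in MeV^2
    return pc2 * np.sin(alpha_eq_rad) ** 2 / (2.0 * E0 * B_gauss)

# Example: a 50 keV electron mirroring at the equator where B ~ 0.003 G
# yields mu of roughly 17 MeV/G, inside the 5-20 MeV/G range quoted above.
print(first_invariant(0.05, np.pi / 2, 0.003))
```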
Inventories of individually delineated landslides are key to understanding landslide physics and mitigating landslide impact. They permit assessment of area–frequency distributions and landslide volumes, and testing of statistical correlations between landslides and physical parameters such as topographic gradient or seismic strong motion. Amalgamation, i.e. the mapping of several adjacent landslides as a single polygon, can severely distort the statistics of these inventories, a problem that is especially acute in data sets produced by automated mapping. We present five inventories of earthquake-induced landslides mapped with different materials and techniques and affected by varying degrees of amalgamation. Errors in the total landslide volume and in the power-law exponent of the area–frequency distribution resulting from amalgamation may reach 200% and 50%, respectively. We present an algorithm, based on image and digital elevation model (DEM) analysis, for the automatic identification of amalgamated polygons. On a set of about 2000 polygons larger than 1000 m², tracing landslides triggered by the 1994 Northridge earthquake, the algorithm performs well, missing only 2.7–3.6% of amalgamated landslides and incorrectly flagging 3.9–4.8% of correct polygons as amalgams. This algorithm can be used broadly to check landslide inventories and allows faster correction by automating the identification of amalgamation.
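Because amalgamation merges several landslides into one oversized polygon, it directly biases the area–frequency exponent. A minimal sketch of the standard continuous maximum-likelihood estimator for that exponent (a Clauset-style MLE; the estimator choice is ours for illustration, not necessarily the paper's):

```python
import numpy as np

def powerlaw_exponent(areas, a_min=1000.0):
    """MLE for the exponent beta of p(A) ~ A^-beta for A >= a_min (areas in m^2).
    Amalgamated polygons inflate the large-area tail and therefore bias this
    estimate -- the distortion the detection algorithm above targets."""
    a = np.asarray(areas, dtype=float)
    a = a[a >= a_min]
    return 1.0 + a.size / np.sum(np.log(a / a_min))
```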
In this study, we present an empirical model of the equatorial electron pitch angle distributions (PADs) in the outer radiation belt based on the full data set collected by the Magnetic Electron Ion Spectrometer (MagEIS) instrument onboard the Van Allen Probes in 2012-2019. The PADs are fitted with a combination of the first, third and fifth sine harmonics. The resulting equation resolves all PAD types found in the outer radiation belt (pancake, flat-top, butterfly and cap PADs) and can be analytically integrated to derive omnidirectional flux. We introduce a two-step modeling procedure that for the first time ensures a continuous dependence on L, magnetic local time and activity, parametrized by the solar wind dynamic pressure. We propose two methods to reconstruct equatorial electron flux using the model. The first approach requires two unidirectional flux observations and is applicable to low pitch angle data. The second method can be used to reconstruct the full equatorial PADs from a single unidirectional or omnidirectional measurement at off-equatorial latitudes. The model can be used for converting long-term data sets of electron fluxes to phase space density in terms of adiabatic invariants, for physics-based modeling in the form of boundary conditions, and for data assimilation purposes.
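Written out under assumed coefficient names, the three-harmonic fit and its closed-form omnidirectional integral read as follows; by the orthogonality of the sine harmonics on [0, π], only the first harmonic survives the integration.

```latex
% Assumed notation for the three-harmonic PAD fit and its analytic integral:
j(\alpha) = A_1 \sin\alpha + A_3 \sin 3\alpha + A_5 \sin 5\alpha ,
\qquad
J_{\mathrm{omni}} = 2\pi \int_0^{\pi} j(\alpha)\,\sin\alpha \, d\alpha = \pi^2 A_1 ,
```

since ∫₀^π sin(mα) sin(α) dα = (π/2) δ_{m1} for the odd harmonics m = 1, 3, 5.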
A large part of classical visual psychophysics was concerned with the fundamental question of how pattern information is initially encoded in the human visual system. From these studies a relatively standard model of early spatial vision emerged, based on spatial-frequency- and orientation-specific channels followed by an accelerating nonlinearity and divisive normalization: contrast gain control. Here we implement such a model in an image-computable way, allowing it to take arbitrary luminance images as input. Testing our implementation on classical psychophysical data, we find that it explains contrast detection data including the ModelFest data, contrast discrimination data, and oblique masking data, using a single set of parameters. Leveraging the advantage of an image-computable model, we test our model against a recent dataset using natural images as masks. We find that the model explains these data reasonably well, too. To explain data obtained at different presentation durations, our model requires different parameters to achieve an acceptable fit. In addition, we show that contrast gain control with the fitted parameters results in a very sparse encoding of luminance information, in line with notions from efficient coding. Translating the standard early spatial vision model to be image-computable yielded two further insights: First, the nonlinear processing requires a denser sampling of spatial frequency and orientation than optimal coding suggests. Second, the normalization needs to be fairly local in space to fit the data obtained with natural image masks. Finally, our image-computable model can serve as a tool in future quantitative analyses: it allows optimized stimuli to be used to test the model and variants of it, with potential applications as an image-quality metric. In addition, it may serve as a building block for models of higher-level processing.
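A compact sketch of the gain-control stage this model family is built on (the exponents, semisaturation constant, and pooling kernel below are assumed placeholder values, not the fitted parameters):

```python
import numpy as np
from scipy.signal import fftconvolve

def gain_control(channel_resp, p=2.4, q=2.0, sigma=0.1, pool_size=5):
    """Illustrative contrast gain-control stage: an accelerating nonlinearity
    |r|^p divided by a normalization pool of |r|^q responses, pooled locally
    in space and across channels -- the model family described above, not the
    authors' exact implementation.
    channel_resp: array (n_channels, H, W) of linear filter responses."""
    excitation = np.abs(channel_resp) ** p
    inhibition = np.abs(channel_resp) ** q
    # Pool the normalization signal over a small spatial neighbourhood
    # (spatial locality is one of the paper's findings) and over channels.
    kernel = np.ones((1, pool_size, pool_size)) / pool_size**2
    pool = fftconvolve(inhibition, kernel, mode="same").sum(axis=0, keepdims=True)
    return excitation / (sigma**q + pool)
```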
We study populations of globally coupled noisy rotators (oscillators with inertia) that allow a nonequilibrium transition from a desynchronized state to a synchronous one (with a nonvanishing order parameter). The newly developed analytical approaches yield solutions describing the synchronous state with constant order parameter for weakly inertial rotators, including the case of zero inertia, in which the model reduces to the Kuramoto model of coupled noisy oscillators. These approaches also provide analytical criteria distinguishing supercritical and subcritical transitions to the desynchronized state and indicate the universality of such transitions in rotator ensembles. All analytical results are confirmed numerically, both by direct simulations of large ensembles and by solution of the associated Fokker-Planck equation. We also propose generalizations of the developed approaches for setups in which the rotator parameters (natural frequencies, masses, noise intensities, coupling strengths and phase shifts) are dispersed.
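The standard form of such an ensemble, whose zero-inertia limit is the noisy Kuramoto model, can be written as follows (the notation is assumed for illustration):

```latex
% Globally coupled noisy rotators and the Kuramoto order parameter:
m \, \ddot\theta_i + \dot\theta_i
  = \omega_i + \frac{K}{N} \sum_{j=1}^{N} \sin(\theta_j - \theta_i) + \xi_i(t),
\qquad
R \, e^{\mathrm{i}\Psi} = \frac{1}{N} \sum_{j=1}^{N} e^{\mathrm{i}\theta_j},
```

where m is the inertia, ξᵢ is white noise with ⟨ξᵢ(t) ξⱼ(t′)⟩ = 2D δᵢⱼ δ(t − t′), and R is the order parameter whose nonvanishing value marks the synchronous state; m → 0 recovers the Kuramoto model with noise.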
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands a corresponding model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge within an ever-changing business environment: once a process is changed, all its models have to be updated. Due to the large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction emerged as an operation that reduces the number of stored process models and facilitates model management. Business process model abstraction is an operation preserving essential process properties and leaving out insignificant details in order to retain information relevant for a particular purpose. Process model abstraction has been addressed by several researchers, whose studies have focused on particular use cases and the model transformations supporting them. This thesis systematically approaches the problem of business process model abstraction and shapes the outcome into a framework. We investigate the current industry demand for abstraction and summarize it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case in which the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on the analysis of the process model structure. Further, abstraction methods considering the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models, a barely researched challenge. The thesis validates the designed abstraction methods against sets of industrial process models and discusses the implementation aspects of the methods. Second, we develop a novel model transformation which, combined with the discovery of related activities, allows flexible non-hierarchical abstraction. In this way the thesis advocates novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
Winter storms are the most costly natural hazard for European residential property. We compare four distinct storm damage functions with respect to their forecast accuracy and variability, with particular regard to the most severe winter storms. The analysis focuses on daily loss estimates under differing spatial aggregation, ranging from district to country level. We discuss the broad and heavily skewed distribution of insured losses, which poses difficulties for both the calibration and the evaluation of damage functions. From theoretical considerations, we provide a synthesis between the frequently discussed cubic wind–damage relationship and recent studies that report much steeper damage functions for European winter storms. The performance of the storm loss models is evaluated for two sources of wind gust data, direct observations by the German Weather Service and ERA-Interim reanalysis data. While the choice of gust data has little impact on the evaluation of German storm loss, spatially resolved coefficients of variation reveal a dependence on both the model and the data choice. The comparison shows that the probabilistic models by Heneka et al. (2006) and Prahl et al. (2012) both provide accurate loss predictions for moderate to extreme losses, with generally small coefficients of variation. We favour the latter model in terms of model applicability. Application of the versatile deterministic model by Klawa and Ulbrich (2003) should be restricted to extreme loss, for which it shows the least bias and errors comparable to the probabilistic model by Prahl et al. (2012).
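For orientation, the cubic relationship referred to above is commonly expressed through the loss index of Klawa and Ulbrich (2003), in which only gusts v exceeding the local 98th percentile v₉₈ contribute, weighted by population (the exact notation here is assumed):

```latex
L \;\propto\; \sum_{i} \mathrm{pop}_i
  \left( \frac{v_i}{v_{98,i}} - 1 \right)^{3}
  \quad \text{for } v_i > v_{98,i} .
```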
There is no consensus on which statistical model estimates school value-added (VA) most accurately. To date, the two most common statistical models used for the calculation of VA scores are two classical methods: linear regression and multilevel models. These models have the advantage of being relatively transparent and thus understandable for most researchers and practitioners. However, they are bound to certain assumptions (e.g., linearity) that might limit their prediction accuracy. Machine learning methods, which have yielded spectacular results in numerous fields, may be a valuable alternative to these classical models. Although big data is not new in general, it is relatively new in the realm of social sciences and education. New types of data require new data-analytical approaches, and such techniques have already evolved in fields with a long tradition of crunching big data (e.g., gene technology). The objective of the present paper is to apply these "imported" techniques to education data, more precisely VA scores, and to assess when and how they can extend or replace the classical psychometric toolbox. The models considered include linear and non-linear methods and extend classical models with the most commonly used machine learning methods (i.e., random forest, neural networks, support vector machines, and boosting). We used representative data from 3,026 students in 153 schools who took part in the standardized achievement tests of the Luxembourg School Monitoring Program in grades 1 and 3. Multilevel models outperformed classical linear and polynomial regressions, as well as the different machine learning models. However, school VA scores from the different model types were highly correlated across all schools. Yet the percentage of disagreement relative to the multilevel models was not trivial, and the real-life implications for individual schools may still be dramatic depending on the model type used. Implications of these results and possible ethical concerns regarding the use of machine learning methods for decision-making in education are discussed.
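As an illustration of how VA scores from different model families can be compared, here is a minimal residual-based sketch (the column names, synthetic data, and residual definition of VA are assumptions, not the study's specification):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

def school_va(df, model):
    """VA as the school-mean residual of predicted grade-3 achievement."""
    X, y = df[["grade1_score"]].values, df["grade3_score"].values
    model.fit(X, y)
    return df.assign(residual=y - model.predict(X)) \
             .groupby("school_id")["residual"].mean()

# Synthetic data standing in for the achievement-test records.
rng = np.random.default_rng(1)
demo = pd.DataFrame({
    "school_id": rng.integers(0, 20, size=1000),
    "grade1_score": rng.normal(500, 100, size=1000),
})
demo["grade3_score"] = 0.7 * demo["grade1_score"] + rng.normal(150, 50, size=1000)

va_linear = school_va(demo, LinearRegression())
va_forest = school_va(demo, RandomForestRegressor(n_estimators=200, random_state=0))
print(np.corrcoef(va_linear, va_forest)[0, 1])  # agreement between model types
```

In this toy setup the correlation between the two sets of school scores plays the role of the model-agreement analysis described above.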
The economic assessment of the impacts of storm surges and sea-level rise in coastal cities requires high-level information on the damage and protection costs associated with varying flood heights. We provide a systematically and consistently calculated dataset of macroscale damage and protection cost curves for the 600 largest European coastal cities, opening up the perspective for a wide range of applications. Offering the first comprehensive dataset to include the costs of dike protection, we provide the underpinning information needed to run comparative assessments of the costs and benefits of coastal adaptation. Aggregate cost curves for coastal flooding at the city level are commonly regarded as by-products of impact assessments and are generally not published as a standalone dataset. Hence, our work also aims at initiating a more critical discussion on the availability and derivation of cost curves.
Flash floods are caused by intense rainfall events and represent an insufficiently understood phenomenon in Germany. As a result of higher precipitation intensities, flash floods might occur more frequently in the future. In combination with changing land-use patterns and urbanisation, damage mitigation, insurance and risk management in flash-flood-prone regions are becoming increasingly important. However, a better understanding of damage caused by flash floods requires the ex post collection of relevant but as yet sparsely available information for research. At the end of May 2016, very high and concentrated rainfall intensities led to severe flash floods in several southern German municipalities. The small town of Braunsbach stood as a prime example of the devastating potential of such events. Eight to ten days after the flash flood event, damage assessment and data collection were conducted in Braunsbach by investigating all affected buildings and their surroundings. To record and store the data on site, the open-source software bundle KoBoCollect was used as an efficient and easy way to gather information. Since the damage-driving factors of flash floods are expected to differ from those of riverine flooding, a post hoc data analysis was performed, aiming to identify the influence of flood processes and building attributes on damage grades, which reflect the extent of structural damage. The analyses include the application of random forest, a random general linear model and multinomial logistic regression, as well as the construction of a local impact map to reveal influences on the damage grades; in addition, a Spearman's rho correlation matrix was calculated. The results reveal that the damage-driving factors of flash floods differ to some extent from those of riverine floods. The orientation of a building with respect to the flow direction shows a particularly strong correlation with the damage grade and has high predictive power within the constructed damage models. Additionally, the results suggest that building materials as well as various building aspects, such as the existence of a shop window, and the surroundings might have an effect on the resulting damage. To verify and confirm these outcomes as well as to support future mitigation strategies, risk management and planning, more comprehensive and systematic data collection is necessary.
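A minimal sketch of the multinomial-logit part of such an analysis (the feature names and synthetic data are hypothetical stand-ins for the building attributes discussed above):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical building attributes and damage grades 1-5; the synthetic
# grades are made to depend mostly on exposure to the flow direction.
rng = np.random.default_rng(2)
n = 300
X = pd.DataFrame({
    "faces_flow_direction": rng.integers(0, 2, n),
    "masonry": rng.integers(0, 2, n),   # crude building-material proxy
    "has_shop_window": rng.integers(0, 2, n),
})
grade = 1 + X["faces_flow_direction"] * 2 + rng.integers(0, 3, n)

# Multinomial logistic regression of damage grade on building attributes.
clf = LogisticRegression(max_iter=1000).fit(X, grade)
for g, coefs in zip(clf.classes_, clf.coef_):
    print(g, dict(zip(X.columns, np.round(coefs, 2))))
```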
Most climate change impacts manifest in the form of natural hazards. Damage assessment typically relies on damage functions that translate the magnitude of extreme events into a quantifiable damage. In practice, the availability of damage functions is limited due to a lack of data sources and a lack of understanding of damage processes. The study of the characteristics of damage functions for different hazards could strengthen the theoretical foundation of damage functions and support their development and validation. Accordingly, we investigate analogies between damage functions for coastal flooding and for wind storms and identify a unified approach. This approach has general applicability for granular portfolios and may also be applied, for example, to heat-related mortality. Moreover, the unification enables the transfer of methodology between hazards and a consistent treatment of uncertainty. This is demonstrated by a sensitivity analysis on the basis of two simple case studies (for coastal flood and storm damage). The analysis reveals the relevance of the various uncertainty sources at varying hazard magnitude, on both the microscale and the macroscale levels. The main findings are the dominance of uncertainty stemming from the hazard magnitude and the persistent behaviour of intrinsic uncertainties on both scale levels. Our results shed light on the general role of uncertainties and provide useful insight for the application of the unified approach.
Even if greenhouse gas emissions were stopped today, sea level would continue to rise for centuries, with the long-term sea-level commitment of a 2 °C warmer world significantly exceeding 2 m. In view of the potential implications for coastal populations and ecosystems worldwide, we investigate, from an ice-dynamic perspective, the possibility of delaying sea-level rise by pumping ocean water onto the surface of the Antarctic ice sheet. We find that, due to wave propagation, ice is discharged back into the ocean much faster than would be expected from pure advection with surface velocities. The delay time depends strongly on the distance from the coastline at which the additional mass is placed and less strongly on the rate of sea-level rise that is mitigated. Millennium-scale storage of at least 80% of the additional ice requires placing it at least 700 km from the coastline. The pumping energy required to lift the ocean water needed to mitigate the currently observed 3 mm yr⁻¹ of sea-level rise would exceed 7% of the current global primary energy supply. At the same time, the approach offers comprehensive protection for entire coastlines, particularly including regions that cannot be protected by dikes.
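The quoted 7% figure can be checked to order of magnitude with round numbers (all inputs below are assumptions for the check, not values from the paper):

```python
# Back-of-envelope check of the pumping-energy claim.
OCEAN_AREA = 3.6e14        # m^2
RISE_RATE = 3e-3           # m/yr of sea-level rise to offset
RHO, G = 1.0e3, 9.81       # kg/m^3, m/s^2
LIFT_HEIGHT = 3.5e3        # m; ice-sheet surface elevation ~700 km inland
EFFICIENCY = 0.9           # assumed pump efficiency
GLOBAL_PRIMARY_ENERGY = 5.8e20  # J/yr (~580 EJ)

mass_per_year = RHO * OCEAN_AREA * RISE_RATE            # ~1.1e15 kg/yr
energy_per_year = mass_per_year * G * LIFT_HEIGHT / EFFICIENCY
print(f"{energy_per_year:.1e} J/yr = "
      f"{100 * energy_per_year / GLOBAL_PRIMARY_ENERGY:.0f}% of primary energy")
# -> roughly 4e19 J/yr, i.e. on the order of 7% of global primary energy supply.
```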
In older persons, the origin of malnutrition is often multifactorial. At present, a common understanding of the potential causes and their mode of action is lacking, and no consensus exists on a theoretical framework for the etiology of malnutrition. Within the European Knowledge Hub "Malnutrition in the Elderly" (MaNuEL), a model of "Determinants of Malnutrition in Aged Persons" (DoMAP) was developed in a multistage consensus process with live meetings and written feedback (modified Delphi process) by a multiprofessional group of 33 experts in geriatric nutrition. DoMAP consists of three triangle-shaped levels with malnutrition at the center, surrounded in the innermost level by the three principal conditions through which malnutrition develops: low intake, high requirements, and impaired nutrient bioavailability. The middle level consists of factors directly causing one of these conditions, and the outermost level contains factors that cause one of the three conditions indirectly, through the direct factors. The DoMAP model may contribute to a common understanding of the multitude of factors involved in the etiology of malnutrition and of potential causative mechanisms. It may serve as a basis for future research and may also be helpful in clinical routine for identifying persons at increased risk of malnutrition.
Robust appraisals of climate impacts at different levels of global-mean temperature increase are vital to guide assessments of dangerous anthropogenic interference with the climate system. The 2015 Paris Agreement includes a two-headed temperature goal: "holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5 °C". Despite the prominence of these two temperature limits, a comprehensive overview of the differences in climate impacts at these levels is still missing. Here we provide an assessment of key impacts of climate change at warming levels of 1.5 °C and 2 °C, including extreme weather events, water availability, agricultural yields, sea-level rise and risk of coral reef loss. Our results reveal substantial differences in impacts between 1.5 °C and 2 °C of warming that are highly relevant for the assessment of dangerous anthropogenic interference with the climate system. For heat-related extremes, the additional 0.5 °C increase in global-mean temperature marks the difference between events at the upper limit of present-day natural variability and a new climate regime, particularly in tropical regions. Similarly, this warming difference is likely to be decisive for the future of tropical coral reefs. In a scenario with an end-of-century warming of 2 °C, virtually all tropical coral reefs are projected to be at risk of severe degradation due to temperature-induced bleaching from 2050 onwards. For a 1.5 °C scenario, this fraction is reduced to about 90% in 2050 and is projected to decline to 70% by 2100. Analyses of precipitation-related impacts reveal distinct regional differences, and hot-spots of change emerge. The regional reduction in median water availability for the Mediterranean is found to nearly double, from 9% to 17%, between 1.5 °C and 2 °C, and the projected lengthening of regional dry spells increases from 7% to 11%. Projections for agricultural yields differ between crop types as well as world regions. While some (in particular high-latitude) regions may benefit, tropical regions like West Africa, South-East Asia, as well as Central and northern South America are projected to face substantial local yield reductions, particularly for wheat and maize. Best-estimate sea-level rise projections based on two illustrative scenarios indicate a rise of 50 cm by 2100 relative to year-2000 levels for a 2 °C scenario, and about 10 cm lower levels for a 1.5 °C scenario. In a 1.5 °C scenario, the rate of sea-level rise in 2100 would be reduced by about 30% compared to a 2 °C scenario. Our findings highlight the importance of regional differentiation in assessing both future climate risks and different vulnerabilities to incremental increases in global-mean temperature. The article provides a consistent and comprehensive assessment of existing projections and a good basis for future work on refining our understanding of the difference between impacts at 1.5 °C and 2 °C of warming.
Soil properties show high heterogeneity at different spatial scales and their correct characterization remains a crucial challenge over large areas. The aim of the study is to quantify the impact of different types of uncertainties that arise from the unresolved soil spatial variability on simulated hydrological states and fluxes. Three perturbation methods are presented for the characterization of uncertainties in soil properties. The methods are applied on the soil map of the upper Neckar catchment (Germany), as an example. The uncertainties are propagated through the distributed mesoscale hydrological model (mHM) to assess the impact on the simulated states and fluxes. The model outputs are analysed by aggregating the results at different spatial and temporal scales. These results show that the impact of the different uncertainties introduced in the original soil map is equivalent when the simulated model outputs are analysed at the model grid resolution (i.e. 500 m). However, several differences are identified by aggregating states and fluxes at different spatial scales (by subcatchments of different sizes or coarsening the grid resolution). Streamflow is only sensitive to the perturbation of long spatial structures while distributed states and fluxes (e.g. soil moisture and groundwater recharge) are only sensitive to the local noise introduced to the original soil properties. A clear identification of the temporal and spatial scale for which finer-resolution soil information is (or is not) relevant is unlikely to be universal. However, the comparison of the impacts on the different hydrological components can be used to prioritize the model improvements in specific applications, either by collecting new measurements or by calibration and data assimilation approaches. In conclusion, the study underlines the importance of a correct characterization of uncertainty in soil properties. With that, soil maps with additional information regarding the unresolved soil spatial variability would provide strong support to hydrological modelling applications.
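A minimal sketch of the two contrasting perturbation types discussed above, applied to a gridded soil property (the Gaussian-filter construction and all parameter values are illustrative assumptions, not the paper's perturbation methods):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)
soil_property = rng.uniform(0.2, 0.5, size=(200, 200))  # e.g., porosity on a grid

def perturb(field, correlation_len_px, sd):
    """Add Gaussian noise with a given spatial correlation length (in pixels).
    correlation_len_px = 0 gives purely local noise; large values give the
    long spatial structures that streamflow responds to."""
    noise = rng.normal(size=field.shape)
    if correlation_len_px > 0:
        noise = gaussian_filter(noise, sigma=correlation_len_px)
        noise /= noise.std()  # restore unit variance after smoothing
    return field + sd * noise

local = perturb(soil_property, correlation_len_px=0, sd=0.02)       # local noise
structured = perturb(soil_property, correlation_len_px=30, sd=0.02) # long structures
```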
Determination of historical parameters of a small catchment, exemplified by the Pfefferfließ
(2010)
Using the example of a watercourse, the Pfefferfließ, the hydrological situation of its near-natural state in the 18th century was reconstructed by means of various methods. The basis for determining this near-natural 18th-century state was historical data such as maps, manuscripts and drainage (melioration) plans. The identification and survey of historical channel cross-sections, as well as the modelling of 18th-century discharge, also contribute to building the overall picture of the 18th century. The insights gained from these data were then evaluated for their further use as a guiding reference (Leitbild) for river restoration measures.