Systems of Systems (SoS) have received a lot of attention recently. In this thesis we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems as open systems. Of course, the system has a clear boundary at each point in time, but this boundary could only be identified by halting the complete SoS; halting a system of that size, however, is practically impossible. Often SoS are combinations of software systems and physical systems. Hence a failure in the software can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and essentially relies on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e. the adding and removing of components and services, would be inadequate. Therefore our approach directly supports the modelling and verification of evolution.
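The structural dynamics described above (constituent systems joining and leaving at run time) can be loosely illustrated with a minimal architectural graph. This is only an invented sketch of the add/remove intuition; the class and rule names are hypothetical and do not reflect the thesis' hybrid graph-transformation formalism, which also carries application conditions and continuous dynamics.

```python
class SoSGraph:
    """Toy architectural graph: nodes are constituent systems, edges are service links."""

    def __init__(self):
        self.nodes = set()
        self.edges = set()  # undirected links stored as frozensets

    def join(self, system, partner=None):
        # Rule sketch: a constituent system joins the ensemble, optionally
        # binding to a collaboration partner that is already present.
        self.nodes.add(system)
        if partner is not None and partner in self.nodes:
            self.edges.add(frozenset((system, partner)))

    def leave(self, system):
        # Rule sketch: a system leaves; all of its links are removed so the
        # remaining graph stays well-formed.
        self.nodes.discard(system)
        self.edges = {e for e in self.edges if system not in e}
```

Applying `join` and `leave` in any order keeps the graph well-formed, which is the property that makes open-system evolution tractable at the architectural level.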
ArcticBeach v1.0
(2022)
In the Arctic, air temperatures are increasing and sea ice is declining, resulting in larger waves and a longer open water season, all of which intensify the thaw and erosion of ice-rich coasts. Climate change has been shown to increase the rate of Arctic coastal erosion, causing problems for Arctic cultural heritage, existing industrial, military, and civil infrastructure, as well as changes in nearshore biogeochemistry. Numerical models that reproduce historical and project future Arctic erosion rates are necessary to understand how further climate change will affect these problems, and no such model yet exists to simulate the physics of erosion on a pan-Arctic scale. We have coupled a bathystrophic storm surge model to a simplified physical erosion model of a permafrost coastline. This Arctic erosion model, called ArcticBeach v1.0, is a first step toward a physical parameterization of Arctic shoreline erosion for larger-scale models. It is forced by wind speed and direction, wave period and height, and sea surface temperature, all of which are masked during times of sea ice cover near the coastline. Model tuning requires observed historical retreat rates (at least one value), as well as rough nearshore bathymetry. These parameters are already available on a pan-Arctic scale. The model is validated at three study sites: 1) Drew Point (DP), Alaska, 2) Mamontovy Khayata (MK), Siberia, and 3) the Veslebogen Cliffs, Svalbard. Simulated cumulative retreat for DP and MK (169 and 170 m, respectively) over the time periods studied at each site (2007-2016 and 1995-2018) is found to be of the same order of magnitude as the observed cumulative retreat (172 and 120 m). The rocky Veslebogen cliffs have small observed cumulative retreat (0.05 m over 2014-2016), and our model was also able to reproduce this order of magnitude of retreat (0.08 m).
Given the large differences in geomorphology between the study sites, this study provides a proof-of-concept that ArcticBeach v1.0 can be applied on very different permafrost coastlines. ArcticBeach v1.0 provides a promising starting point to project retreat of Arctic shorelines, or to evaluate historical retreat in places that have had few observations.
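The masking of forcing during sea ice cover mentioned above can be sketched in a few lines. The 15% ice-concentration threshold below is an assumption chosen for illustration, not a value from the paper; ArcticBeach's actual masking of the full forcing set may differ.

```python
import numpy as np

def mask_open_water_forcing(wave_height_m, ice_concentration, threshold=0.15):
    """Zero out wave forcing whenever sea ice concentration exceeds the
    (assumed) open-water threshold; elsewhere pass the forcing through."""
    wave_height_m = np.asarray(wave_height_m, dtype=float)
    ice_concentration = np.asarray(ice_concentration, dtype=float)
    return np.where(ice_concentration > threshold, 0.0, wave_height_m)
```

Because erosion is driven only by the unmasked time steps, a longer open water season feeds directly into larger cumulative retreat.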
BEEHAVE offers a valuable tool for researchers to design and focus field experiments, for regulators to explore the relative importance of stressors in order to devise management and policy advice, and for beekeepers to understand and predict varroa dynamics and the effects of management interventions. We expect that scientists and stakeholders will find a variety of applications for BEEHAVE, stimulating further model development and the possible inclusion of other stressors of potential importance to honeybee colony dynamics.
Lake ecosystems across the globe have responded to climate warming of recent decades. However, correctly attributing observed changes to altered climatic conditions is complicated by multiple anthropogenic influences on lakes. This thesis contributes to a better understanding of climate impacts on freshwater phytoplankton, which forms the basis of the food chain and decisively influences water quality. The analyses were, for the most part, based on a long-term data set of physical, chemical and biological variables of a shallow, polymictic lake in north-eastern Germany (Müggelsee), which was subject to a simultaneous change in climate and trophic state during the past three decades. Data analysis included constructing a dynamic simulation model, implementing a genetic algorithm to parameterize models, and applying statistical techniques of classification tree and time-series analysis. Model results indicated that climatic factors and trophic state interactively determine the timing of the phytoplankton spring bloom (phenology) in shallow lakes. Under equally mild spring conditions, the phytoplankton spring bloom collapsed earlier under high than under low nutrient availability, due to a switch from a bottom-up driven to a top-down driven collapse. A novel approach to model phenology proved useful to assess the timings of population peaks in an artificially forced zooplankton-phytoplankton system. Mimicking climate warming by lengthening the growing period advanced algal blooms and consequently also peaks in zooplankton abundance. Investigating the reasons for the contrasting development of cyanobacteria during two recent summer heat wave events revealed that anomalously hot weather did not always, as often hypothesized, promote cyanobacteria in the nutrient-rich lake studied. The seasonal timing and duration of heat waves determined whether critical thresholds of thermal stratification, decisive for cyanobacterial bloom formation, were crossed. 
In addition, the temporal patterns of heat wave events influenced the summer abundance of some zooplankton species, which as predators may serve as a buffer by suppressing phytoplankton bloom formation. This thesis adds to the growing body of evidence that lake ecosystems have strongly responded to climatic changes of recent decades. It reaches beyond many previous studies of climate impacts on lakes by focusing on underlying mechanisms and explicitly considering multiple environmental changes. Key findings show that climate impacts are more severe in nutrient-rich than in nutrient-poor lakes. Hence, to develop lake management plans for the future, limnologists need to seek a comprehensive, mechanistic understanding of overlapping effects of the multi-faceted human footprint on aquatic ecosystems.
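The thesis mentions a genetic algorithm for parameterizing models. As a hedged, generic sketch (the actual lake model and GA operators are not specified here), the following fits a logistic growth curve, a common stand-in for a bloom, to synthetic observations with a simple elitist GA; all parameter values, bounds and mutation widths are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def logistic(t, r, K, n0=0.1):
    """Logistic growth: biomass approaching capacity K at rate r."""
    return K / (1 + (K / n0 - 1) * np.exp(-r * t))

t_obs = np.linspace(0, 25, 26)
y_obs = logistic(t_obs, r=0.5, K=10.0)   # synthetic "observations"

def fitness(params):
    r, K = params
    return -np.sum((logistic(t_obs, r, K) - y_obs) ** 2)

# random initial population within (assumed) plausible bounds
pop = np.column_stack([rng.uniform(0.01, 2.0, 60), rng.uniform(1.0, 30.0, 60)])
for gen in range(150):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-12:]]                      # truncation selection
    children = elite[rng.integers(0, 12, 48)] + rng.normal(0, [0.02, 0.2], (48, 2))
    children = np.clip(children, [1e-3, 0.5], [3.0, 50.0])     # Gaussian mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
```

Elitism guarantees the best fit never degrades between generations, which is why even this crude operator set recovers the generating parameters.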
Complete protection against flood risks by structural measures is impossible. Therefore flood prediction is important for flood risk management. Good explanatory power of flood models requires a meaningful representation of bio-physical processes, so there is great interest in improving the process representation. Progress in hydrological process understanding is achieved through a learning cycle that includes, as a first step, the critical assessment of an existing model for a given catchment. The assessment highlights deficiencies of the model, from which useful additional data requirements are derived, giving a guideline for new measurements. These new measurements may in turn lead to improved process concepts. The improved process concepts are finally summarized in an updated hydrological model. In this thesis I demonstrate such a learning cycle, focusing on the advancement of model evaluation methods and more cost-effective measurements. For a successful model evaluation, I propose that three questions should be answered: 1) When does a model reproduce observations in a satisfactory way? 2) If model results deviate, what is the nature of the difference? And 3) which model components most likely cause these differences? To answer the first two questions, I developed a new method to assess the temporal dynamics of model performance (TIGER: TIme series of Grouped ERrors). This method is powerful in highlighting recurrent patterns of insufficient model behaviour over long simulation periods. I answered the third question with the analysis of the temporal dynamics of parameter sensitivity (TEDPAS). Calculating TEDPAS requires an efficient method for sensitivity analysis; I used the Fourier Amplitude Sensitivity Test, which has an efficient sampling scheme. Combining the two methods TIGER and TEDPAS provided a powerful tool for model assessment.
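TIGER itself groups errors in a more elaborate way; the following sketch only illustrates the underlying idea of a time series of model performance, here a sliding-window Nash-Sutcliffe efficiency (the window length is arbitrary and the function is not from the thesis).

```python
import numpy as np

def windowed_nse(obs, sim, window=30):
    """Nash-Sutcliffe efficiency computed in sliding windows, yielding a
    time series of model performance instead of a single global score."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    out = []
    for i in range(len(obs) - window + 1):
        o, s = obs[i:i + window], sim[i:i + window]
        var = np.sum((o - o.mean()) ** 2)
        out.append(1.0 - np.sum((s - o) ** 2) / var if var > 0 else np.nan)
    return np.array(out)
```

A recurrent dip in such a series, e.g. every spring, is exactly the kind of pattern that points at a deficient process description (such as snow dynamics) rather than at random error.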
With WaSiM-ETH applied to the Weisseritz catchment as a case study, I found insufficient process descriptions for the snow dynamics and for the recession during dry periods in late summer and fall. Focusing on snow dynamics, reasons for poor model performance can be a poor representation of snow processes in the model, poor data on snow cover, or both. To obtain an improved data set on snow cover, time series of snow height and temperature were collected with a cost-efficient method based on temperature measurements at multiple levels at each location. An algorithm was developed to simultaneously estimate snow height and cold content from these measurements. Both snow height and cold content are relevant quantities for spring flood forecasting. Spatial variability was observed at the local and the catchment scale with an adjusted sampling design. At the local scale, samples were collected on two perpendicular transects of 60 m length and analysed with geostatistical methods. The range determined from fitted theoretical variograms was within the extent of the sampling design for 80% of the plots. No patterns were found that would explain the random variability and spatial correlation at the local scale. At the watershed scale, locations for the extensive field campaign were selected according to a stratified sampling design to capture the combined effects of elevation, aspect and land use. Snow height was mainly affected by plot elevation; the expected influence of aspect and land use was not observed. To better understand the deficiencies of the snow module in WaSiM-ETH, a model based on the same approach, a simple degree-day model, was checked for its capability to reproduce the data. The degree-day model was capable of explaining the temporal variability for plots with a continuous snow pack over the entire snow season, provided parameters were estimated for single plots.
However, the processes described in the simple model are not sufficient to represent multiple accumulation-melt cycles, as observed for the lower catchment. Thus, the combined spatio-temporal variability at the watershed scale is not captured by the model. Further tests of improved concepts for the representation of snow dynamics at the Weisseritz are required. Based on the data, I suggest including at least rain-on-snow and redistribution by wind as additional processes to better describe the spatio-temporal variability. Alternatively, an energy-balance snow model could be tested. Overall, the proposed learning cycle is a useful framework for targeted model improvement. The advanced model diagnostics are valuable to identify model deficiencies and to guide field measurements. The additional data collected throughout this work help to deepen the understanding of the processes in the Weisseritz catchment.
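The degree-day approach referred to above can be written in a few lines. This is a generic sketch, not the WaSiM-ETH snow module; the degree-day factor and threshold temperatures are placeholder values that would have to be estimated per plot, as the thesis does.

```python
def degree_day_swe(temp_c, precip_mm, ddf=3.0, t_melt=0.0, t_snow=0.0):
    """Track snow water equivalent (mm) with a degree-day melt model:
    precipitation accumulates as snow below t_snow, and daily melt is
    ddf * (T - t_melt) on warm days, limited by the available snow."""
    swe, series = 0.0, []
    for t, p in zip(temp_c, precip_mm):
        if t <= t_snow:
            swe += p                                   # snowfall accumulates
        melt = min(swe, ddf * max(t - t_melt, 0.0))    # degree-day melt
        swe -= melt
        series.append(swe)
    return series
```

Multiple accumulation-melt cycles emerge naturally from alternating cold and warm spells, but, as noted above, processes such as rain-on-snow and wind redistribution are absent from this formulation.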
A comet is a highly dynamic object, undergoing a permanent state of change. These changes have to be carefully classified and considered according to their intrinsic temporal and spatial scales. The Rosetta mission has, through its contiguous in-situ and remote sensing coverage of comet 67P/Churyumov-Gerasimenko (hereafter 67P) over the time span of August 2014 to September 2016, monitored the emergence, culmination, and winding down of the gas and dust comae. This provided an unprecedented data set and has spurred a large effort to connect in-situ and remote sensing measurements to the surface. In this review, we address our current understanding of cometary activity and the challenges involved when linking comae data to the surface. We give the current state of research by describing what we know about the physical processes involved from the surface to a few tens of kilometres above it with respect to the gas and dust emission from cometary nuclei. Further, we describe how complex multidimensional cometary gas and dust models have developed from the Halley encounter of 1986 to today. This includes the study of inhomogeneous outgassing and determination of the gas and dust production rates. Additionally, the different approaches used and results obtained to link coma data to the surface will be discussed. We discuss forward and inversion models and we describe the limitations of the respective approaches. The current literature suggests that there does not seem to be a single uniform process behind cometary activity. Rather, activity seems to be the consequence of a variety of erosion processes, including the sublimation of both water ice and more volatile material, but possibly also more exotic processes such as fracture and cliff erosion under thermal and mechanical stress, sub-surface heat storage, and a complex interplay of these processes. 
Seasons and the nucleus shape are key factors for the distribution and temporal evolution of activity and imply that the heliocentric evolution of activity can be highly individual for every comet, and generalisations can be misleading.
Biomimicry is the art of mimicking nature to overcome a particular technical or scientific challenge. The approach studies how evolution has found solutions to the most complex problems in nature, which makes it a powerful method for science. In combination with the rapid development of manufacturing and information technologies in the digital age, structures and materials that were previously thought to be unrealizable can now be created with a simple sketch and the touch of a button. The primary goal of this doctoral thesis was to investigate how digital tools, such as programming, modelling, 3D design tools and 3D printing, with the help of biomimicry, could lead to new analysis methods in science and new medical devices in medicine.
The Electrical Discharge Machining (EDM) process is commonly applied to deform or mold hard metals that are difficult to work with normal machinery. A workpiece submerged in an electrolyte is deformed while in close vicinity to an electrode. When a high voltage is applied between the workpiece and the electrode, sparks create cavities in the substrate, which removes material that is then flushed away by the electrolyte. Usually such surfaces are analysed based on roughness; in this work a novel curvature analysis method is presented as an alternative. In addition, to better understand how the surface changes over the processing time of the EDM process, a digital impact model was created that forms craters and ridges on an originally flat substrate. These substrates were then analysed using the curvature analysis method at different processing times of the model. It was found that a substrate reaches an equilibrium after around 10,000 impacts. The proposed curvature analysis method has potential to be used in the design of new cell culture substrates for stem cells.
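The curvature analysis method itself is specific to the thesis, but the kind of quantity involved can be illustrated: the signed curvature of a sampled 1-D surface profile follows from the standard formula kappa = y'' / (1 + y'^2)^(3/2), estimated here with finite differences. This is a minimal sketch, not the thesis' method.

```python
import numpy as np

def profile_curvature(y, dx):
    """Signed curvature of a 1-D height profile y(x) sampled at spacing dx,
    using the standard formula kappa = y'' / (1 + y'^2)^(3/2)."""
    dy = np.gradient(y, dx)      # first derivative y'
    d2y = np.gradient(dy, dx)    # second derivative y''
    return d2y / (1.0 + dy ** 2) ** 1.5
```

On an EDM-like surface, the distribution of kappa separates crater floors (one sign of curvature) from ridges (the other), information that a single roughness average cannot provide.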
The Venus flytrap can shut its jaws at an amazing speed. The shutting mechanism may be interesting to use in science and is an example of a so-called mechanically bi-stable system: it has two stable states. In this work, two truncated pyramid structures were modelled using a non-linear mechanical model called the Chained Beam Constraint Model (CBCM). The structure with a slope angle of 30 degrees is not bi-stable, whereas the structure with a slope angle of 45 degrees is bi-stable. Developing this idea further by using PEVA, which has a shape-memory effect, the structure that is not bi-stable could be programmed to be bi-stable and then switched back again. This could be used as an energy storage system. Another species with an interesting mechanism is the tapeworm. Some species of this animal have a crown of hooks and suckers located on the side. The parasite is commonly found in the lower intestine of mammals and attaches to the walls using its suckers. When the tapeworm has found a suitable spot, it ejects its hooks and permanently attaches to the wall. This function could be used in minimally invasive medicine to gain better control of implants during the implantation process. Using the CBCM and a 3D printer capable of tuning how hard or soft a printed part is, a design strategy was developed to investigate how one could create a device that mimics the tapeworm. In the end, a prototype was created which was able to attach to a pork loin at an underpressure of 20 kPa and to eject its hooks at an underpressure of 50 kPa or above.
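The CBCM itself is beyond a short sketch, but the way bistability can switch on and off with slope angle can be illustrated with a toy two-bar (von Mises type) truss with a vertical restoring spring at the apex. All stiffness values here are invented for illustration and do not come from the thesis; the toy only mirrors the qualitative 30-degree versus 45-degree result.

```python
import numpy as np

def truss_energy(u, slope_deg, k_bar=25.0, k_spring=1.0, w=1.0):
    """Toy elastic energy: two bars from (+/-w, 0) to an apex at height h,
    with the apex pushed down by u, plus a spring penalizing displacement."""
    h = w * np.tan(np.radians(slope_deg))
    l0 = w / np.cos(np.radians(slope_deg))   # unstretched bar length
    l = np.sqrt(w ** 2 + (h - u) ** 2)       # current bar length
    return k_bar * (l - l0) ** 2 + k_spring * u ** 2

def count_stable_states(slope_deg):
    """Count local minima of the energy over a displacement sweep."""
    h = np.tan(np.radians(slope_deg))
    u = np.linspace(-0.5 * h, 3.0 * h, 2001)
    e = truss_energy(u, slope_deg)
    interior = (e[1:-1] < e[:-2]) & (e[1:-1] < e[2:])
    return int(interior.sum())
```

With these arbitrary stiffnesses the 45-degree geometry has two energy minima (bi-stable) while the 30-degree one has a single minimum, because the shallower geometry stores too little bar strain energy to overcome the restoring spring.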
These three projects demonstrate how digital tools and biomimicry can be used together to arrive at applicable solutions in science and in medicine.
To investigate the reliability and stability of spherical harmonic models based on archeo-/paleomagnetic data, 2000 geomagnetic models were calculated. All models are based on the same data set, but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable throughout all models. By ranking all models according to how well their dipole coefficients agree with gufm1, more realistic uncertainty estimates were derived than those provided by the authors of the data.
The derived uncertainty estimates were used in further modelling, which combines archeo-/paleomagnetic and historical data. The huge differences in data count, accuracy and coverage between these two very different data sources made it necessary to introduce a time-dependent spatial damping, which was constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated by considering each data point as a Gaussian random variable whose mean is the original value and whose standard deviation is its uncertainty. The final model, arhimag1k, is calculated by taking the mean of the 501 sets of Gauss coefficients. arhimag1k fits different dependent and independent data sets well. It shows an early reverse flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the present-day location of the South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment shows a constant behaviour between 1600 and 1840 AD.
In the second part of the thesis, four new paleointensities from four different flows on the island of Fogo, which is part of Cape Verde, are presented. The data are fitted well by arhimag1k, with the exception of the value at 1663 of 28.3 microtesla, which is approximately 10 microtesla lower than the model suggests.
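The ensemble procedure described above (treat each datum as a Gaussian random variable, refit the model, and average the resulting coefficient sets) can be sketched generically. The polynomial stand-in below is illustrative only; the actual models invert for spherical harmonic Gauss coefficients, and the data and uncertainties here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "data" with per-point uncertainties (illustrative, not the real data set)
x = np.linspace(-1, 1, 40)
y_true = 1.0 - 0.5 * x + 0.8 * x ** 2
sigma = np.full_like(x, 0.1)
y_obs = y_true + rng.normal(0, sigma)

# 501 realizations: each datum is a Gaussian random variable
# (mean = observed value, standard deviation = its uncertainty)
coeff_sets = []
for _ in range(501):
    y_real = y_obs + rng.normal(0, sigma)
    coeff_sets.append(np.polynomial.polynomial.polyfit(x, y_real, 2))

mean_coeffs = np.mean(coeff_sets, axis=0)   # analogue of averaging Gauss coefficients
spread = np.std(coeff_sets, axis=0)         # ensemble spread -> uncertainty estimate
```

The spread of the 501 coefficient sets is what turns data uncertainties into model uncertainties, which is the point of the ranking and ensemble construction above.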
Over the past decades, floods have caused significant financial losses in Turkey, amounting to US$ 800 million between 1960 and 2014. The Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR) aims to reduce direct economic loss from disasters in relation to the global gross domestic product (GDP) by 2030. Accordingly, a methodology based on experiences from developing countries was proposed by the United Nations Office for Disaster Risk Reduction (UNDRR) to estimate direct economic losses on the macro-scale. Since Turkey has also signed the SFDRR, we aimed to adapt, validate and apply the loss estimation model proposed by the UNDRR in Turkey for the first time. To do so, the well-documented flood event in Mersin of 2016 was used to calibrate the damage ratios for the agricultural, commercial and residential sectors, as well as for educational facilities. Case studies between 2015 and 2020 with documented losses were further used to validate the model. Finally, model applications provided initial loss estimates for floods that occurred recently in Turkey. Despite the limited event documentation for each sector, the calibrated model yielded good results when compared to documented losses. Thus, by implementing the UNDRR method, this study provides an approach to estimate direct economic losses in Turkey on the macro-scale, which can be used to fill gaps in event databases, support the coordination of financial aid after flood events, and facilitate monitoring of progress toward Global Target C of the Sendai Framework for Disaster Risk Reduction 2015-2030.
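A macro-scale estimate of the kind described above reduces to multiplying the exposed value in each sector by a calibrated damage ratio and summing. The sector values and ratios below are invented for illustration; they are not the ratios calibrated on the Mersin event.

```python
def flood_loss(exposure, damage_ratios):
    """Macro-scale direct loss: sum over sectors of exposed value times
    the calibrated damage ratio for that sector."""
    return sum(exposure[s] * damage_ratios[s] for s in exposure)

# hypothetical exposure (currency units) and damage ratios per sector
exposure = {"residential": 5e6, "commercial": 2e6, "agricultural": 1e6}
ratios = {"residential": 0.10, "commercial": 0.15, "agricultural": 0.30}
loss = flood_loss(exposure, ratios)
```

Calibration then means adjusting the ratios so that modelled losses match documented losses for well-observed events, after which the same ratios yield first estimates for poorly documented ones.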
Borehole leakage is a common and complex issue. Understanding the fluid flow characteristics of a cemented area inside a borehole is crucial to monitor and quantify wellbore integrity, as well as to find solutions that minimize existing leakages. In order to improve our understanding of the flow behaviour of cemented boreholes, we investigated experimental data from a large-scale borehole leakage test by means of numerical modelling using three different conceptual models. The experiment was performed with an autoclave system consisting of two vessels bridged by a cement-filled casing. After a partial bleed-off at the well-head, a sustained casing pressure was observed due to fluid flow through the cement-steel composite. The aim of our simulations is to investigate and quantify the permeability of the cement-steel composite. From our model results, we conclude that the flow occurred along a preferential flow path at the cement-steel interface; thus, the inner part of the cement core was impermeable for the duration of the experiment. The preferential flow path can be described as a highly permeable and highly porous area with an aperture of about 5 μm and a permeability of 3 × 10⁻¹² m² (3 darcy). It follows that the fluid flow characteristics of a cemented area inside a borehole cannot be described by a single permeability value for the entire cement-steel composite. Furthermore, it can be concluded that the quality of the cement and of the filling process at the cement-steel interface is crucial to minimize possible well leakages.
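The reported aperture and permeability are consistent with the standard parallel-plate (cubic law) idealization of a fracture, k = a²/12. Whether this study used exactly that relation is not stated, so the sketch below is only an order-of-magnitude cross-check under that assumption.

```python
def cubic_law_permeability(aperture_m):
    """Permeability of an idealized parallel-plate slot: k = a^2 / 12."""
    return aperture_m ** 2 / 12.0

DARCY_M2 = 9.869e-13               # 1 darcy expressed in m^2

k = cubic_law_permeability(5e-6)   # ~2.1e-12 m^2 for a 5 micrometre aperture
k_darcy = k / DARCY_M2             # ~2.1 darcy, same order as the reported 3 darcy
```

That the idealized slot lands within a factor of two of the fitted value supports describing the leak as a thin interface channel rather than as matrix flow through the cement.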