Refine
Year of publication
Document Type
- Doctoral Thesis (13)
- Article (10)
- Postprint (4)
- Bachelor Thesis (1)
- Monograph/Edited Volume (1)
- Other (1)
Is part of the Bibliography
- yes (30)
Keywords
- simulation (30)
Institute
- Institut für Biochemie und Biologie (5)
- Institut für Geowissenschaften (5)
- Institut für Physik und Astronomie (5)
- Institut für Umweltwissenschaften und Geographie (3)
- Hasso-Plattner-Institut für Digital Engineering GmbH (2)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (2)
- Mathematisch-Naturwissenschaftliche Fakultät (2)
- Wirtschaftswissenschaften (2)
- Department Psychologie (1)
- Extern (1)
Situated in an active tectonic region, Santiago de Chile, the country's capital with more than six million inhabitants, faces tremendous earthquake hazard. Macroseismic data for the 1985 Valparaiso and the 2010 Maule events show large variations in the distribution of damage to buildings within short distances, indicating a strong influence of local sediments and the shape of the sediment-bedrock interface on ground motion. Therefore, a temporary seismic network was installed in the urban area for recording earthquake activity, and a study was carried out aiming to estimate site amplification derived from earthquake data and ambient noise. The analysis of earthquake data shows significant dependence on the local geological structure with regard to amplitude and duration. Moreover, the analysis of noise spectral ratios shows that they can provide a lower bound in amplitude for site amplification and, since no variability in terms of time and amplitude is observed, that it is possible to map the fundamental resonance frequency of the soil for a 26 km x 12 km area in the northern part of the Santiago de Chile basin. By inverting the noise spectral ratios, local shear wave velocity profiles could be derived under the constraint of the thickness of the sedimentary cover, which had previously been determined by gravimetric measurements. The resulting 3D model was derived by interpolation between the individual shear wave velocity profiles and shows locally good agreement with the few existing velocity profile data, but allows the entire area, as well as deeper parts of the basin, to be represented in greater detail. The wealth of available data further allowed checking whether any correlation between the shear wave velocity in the uppermost 30 m (vs30) and the slope of topography, a technique recently proposed by Wald and Allen (2007), exists on a local scale.
While one lithology might provide a greater scatter in the velocity values for the investigated area, almost no correlation between topographic gradient and calculated vs30 exists, whereas a better link is found between vs30 and the local geology. When comparing the vs30 distribution with the MSK intensities for the 1985 Valparaiso event, it becomes clear that high intensities are found where the expected vs30 values are low and over a thick sedimentary cover. Although this evidence cannot be generalized for all possible earthquakes, it indicates the influence of site effects modifying the ground motion when earthquakes occur well outside of the Santiago basin. Using the attained knowledge on the basin characteristics, simulations of strong ground motion within the Santiago Metropolitan area were carried out by means of the spectral element technique. The simulation of a regional event, which had also been recorded by a dense network installed in the city of Santiago for recording aftershock activity following the 27 February 2010 Maule earthquake, shows that the model is capable of realistically calculating ground motion in terms of amplitude, duration, and frequency and, moreover, that the surface topography and the shape of the sediment-bedrock interface strongly modify ground motion in the Santiago basin. An examination of the dependence of ground motion on hypocenter location for a hypothetical event occurring along the active San Ramón fault, which crosses the eastern outskirts of the city, shows that the unfavorable interaction between fault rupture, radiation mechanism, and complex geological conditions in the near-field may give rise to large values of peak ground velocity and therefore considerably increase the level of seismic risk for Santiago de Chile.
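The mapping of fundamental resonance frequencies from ambient noise described above is commonly based on horizontal-to-vertical (H/V) spectral ratios. A minimal sketch of that computation on synthetic two-component noise follows; the signals and the assumed 2 Hz resonance are invented for illustration and are not data from the study:

```python
import numpy as np

def hv_spectral_ratio(horizontal, vertical, fs):
    """Return frequencies and the H/V amplitude-spectral ratio."""
    n = len(vertical)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    h_amp = np.abs(np.fft.rfft(horizontal))
    v_amp = np.abs(np.fft.rfft(vertical))
    # guard against division by zero in the ratio
    ratio = h_amp / np.maximum(v_amp, 1e-12)
    return freqs, ratio

# synthetic demo: both components share the same noise record,
# but the horizontal is amplified around an assumed 2 Hz site resonance
fs, n = 100.0, 4096
t = np.arange(n) / fs
rng = np.random.default_rng(0)
vertical = rng.standard_normal(n)
horizontal = vertical + 5.0 * np.sin(2 * np.pi * 2.0 * t)

freqs, ratio = hv_spectral_ratio(horizontal, vertical, fs)
f0 = freqs[np.argmax(ratio)]  # estimated fundamental resonance frequency
```

In practice the spectra would be smoothed and averaged over many noise windows; the peak of the ratio is then read off as the site's fundamental frequency.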
Advanced mechatronic systems have to integrate existing technologies from mechanical, electrical and software engineering. They must be able to adapt their structure and behavior at runtime by reconfiguration to react flexibly to changes in the environment. Therefore, a tight integration of structural and behavioral models of the different domains is required. This integration results in complex reconfigurable hybrid systems, the execution logic of which cannot be addressed directly with existing standard modeling, simulation, and code-generation techniques. We present in this paper how our component-based approach for reconfigurable mechatronic systems, MECHATRONIC UML, efficiently handles the complex interplay of discrete behavior and continuous behavior in a modular manner. In addition, its extension to even more flexible reconfiguration cases is presented.
Air pollution is the number one environmental cause of premature deaths in Europe. Despite extensive regulations, air pollution remains a challenge, especially in urban areas. For studying summertime air quality in the Berlin-Brandenburg region of Germany, the Weather Research and Forecasting Model with Chemistry (WRF-Chem) is set up and evaluated against meteorological and air quality observations from monitoring stations as well as from a field campaign conducted in 2014. The objective is to assess which resolution and level of detail in the input data are needed for simulating urban background air pollutant concentrations and their spatial distribution in the Berlin-Brandenburg area. The model setup includes three nested domains with horizontal resolutions of 15, 3 and 1 km and anthropogenic emissions from the TNO-MACC III inventory. We use RADM2 chemistry and the MADE/SORGAM aerosol scheme. Three sensitivity simulations are conducted updating input parameters to the single-layer urban canopy model based on structural data for Berlin, specifying land use classes on a sub-grid scale (mosaic option) and downscaling the original emissions to a resolution of ca. 1 km x 1 km for Berlin based on proxy data including traffic density and population density. The results show that the model simulates meteorology well, though urban 2 m temperature and urban wind speeds are biased high and nighttime mixing layer height is biased low in the base run with the settings described above. We show that the simulation of urban meteorology can be improved when specifying the input parameters to the urban model, and to a lesser extent when using the mosaic option. On average, ozone is simulated reasonably well, but maximum daily 8 h mean concentrations are underestimated, which is consistent with the results from previous modelling studies using the RADM2 chemical mechanism. Particulate matter is underestimated, which is partly due to an underestimation of secondary organic aerosols.
NOx (NO + NO2) concentrations are simulated reasonably well on average, but nighttime concentrations are overestimated due to the model's underestimation of the mixing layer height, and urban daytime concentrations are underestimated. The daytime underestimation is improved when using downscaled, and thus locally higher emissions, suggesting that part of this bias is due to deficiencies in the emission input data and their resolution. The results further demonstrate that a horizontal resolution of 3 km improves the results and spatial representativeness of the model compared to a horizontal resolution of 15 km. With the input data (land use classes, emissions) at the level of detail of the base run of this study, we find that a horizontal resolution of 1 km does not improve the results compared to a resolution of 3 km. However, our results suggest that a 1 km horizontal model resolution could enable a detailed simulation of local pollution patterns in the Berlin-Brandenburg region if the urban land use classes, together with the respective input parameters to the urban canopy model, are specified with a higher level of detail and if urban emissions of higher spatial resolution are used.
Pattern-process analysis is one of the main threads in landscape ecological research. It aims at understanding the complex relationships between ecological processes and landscape patterns, identifying the underlying mechanisms and deriving valid predictions for scenarios of landscape change and its consequences. Today, various studies cope with these tasks through so-called "landscape modelling" approaches. They integrate different aspects of heterogeneous and dynamic landscapes and model different driving forces, often using both statistical and process-oriented techniques. We identify two main approaches to deal with the analysis of pattern-process interactions: the first starts with pattern detection, pattern description and pattern analysis, the second with process description, simulation and pattern generation. By focussing on the interplay between these two approaches, landscape analysis and landscape modelling will improve our understanding of pattern-process interactions. The comparison of simulated and observed patterns is a prerequisite for both approaches. Therefore, we identify a set of quantitative, robust, and reproducible methods for the analysis of spatiotemporal patterns as a starting point for a standard toolbox for ecologists, a major future challenge, and suggest necessary further methodological developments.
The correctness of model transformations is a crucial element for model-driven engineering of high-quality software. In particular, behavior preservation is the most important correctness property, avoiding the introduction of semantic errors during the model-driven engineering process. Behavior preservation verification techniques either show that specific properties are preserved or, more generally and at greater complexity, establish some kind of behavioral equivalence or refinement between the source and target model of the transformation. Both kinds of behavior preservation verification goals have been presented with automatic tool support for the instance level, i.e. for a given source and target model specified by the model transformation. However, until now no automatic verification approach has been available at the transformation level, i.e. for all source and target models specified by the model transformation.
In this report, we extend our results presented in [27] and outline a new, sophisticated approach for the automatic verification of behavior preservation, captured by bisimulation or simulation, for model transformations specified by triple graph grammars and semantic definitions given by graph transformation rules. In particular, we show that the behavior preservation problem can be reduced to invariant checking for graph transformation, and that the resulting checking problem can be addressed by our own invariant checker even for a complex example in which a sequence chart is transformed into communicating automata. We further discuss the current limitations of invariant checking for graph transformation and motivate further lines of future work in this direction.
Microswimmers, i.e. swimmers of micron size experiencing low Reynolds numbers, have received a great deal of attention in recent years, since many applications are envisioned in medicine and bioremediation. A promising field is that of magnetic swimmers, since magnetism is biocompatible and can be used to direct or actuate the swimmers. This thesis studies two examples of magnetic microswimmers from a physics point of view.
The first systems studied are magnetic cells, which can be magnetic biohybrids (a swimming cell coupled with a magnetic synthetic component) or magnetotactic bacteria (naturally occurring bacteria that produce an intracellular chain of magnetic crystals). A magnetic cell can passively interact with external magnetic fields, which can be used for directing it. The aim of the thesis is to understand how magnetic cells couple this magnetic interaction to their swimming strategies, mainly how they combine it with chemotaxis (the ability to sense external gradients of chemical species and to bias their random walk along these gradients). In particular, one open question concerns the advantage that these magnetic interactions give magnetotactic bacteria in a natural environment, such as porous sediments. In the thesis, a modified Active Brownian Particle model is used to perform simulations and to reproduce experimental data for different systems, such as bacteria swimming in the bulk, in a capillary, or in confined geometries. I will show that magnetic fields speed up chemotaxis under special conditions, depending on parameters such as the swimming strategy (run-and-tumble or run-and-reverse), the aerotactic strategy (axial or polar), and the magnetic field (intensity and orientation), but that they can also hinder bacterial chemotaxis, depending on the system.
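The core of such a modified Active Brownian Particle model can be sketched in a few lines. The toy run-and-tumble simulation below adds a magnetic alignment torque to rotational diffusion and Poissonian tumbles; all parameter values are hypothetical, not those used in the thesis, and serve only to show how a field biases the random walk along its direction:

```python
import numpy as np

def simulate(n_steps=20000, dt=0.01, v=1.0, tumble_rate=1.0,
             d_rot=0.5, omega_mag=2.0, seed=1):
    """2D run-and-tumble swimmer with a magnetic field along +x."""
    rng = np.random.default_rng(seed)
    x = np.zeros(2)                       # position
    theta = rng.uniform(0, 2 * np.pi)     # orientation angle
    for _ in range(n_steps):
        # magnetic torque relaxes the orientation toward the field (theta = 0)
        theta += -omega_mag * np.sin(theta) * dt
        # rotational diffusion
        theta += np.sqrt(2 * d_rot * dt) * rng.standard_normal()
        # Poissonian tumbles fully randomize the orientation
        if rng.random() < tumble_rate * dt:
            theta = rng.uniform(0, 2 * np.pi)
        # self-propulsion at constant speed v
        x += v * dt * np.array([np.cos(theta), np.sin(theta)])
    return x

displacement = simulate()  # net drift is strongly biased along the field (+x)
```

Chemotactic sensing, confinement, and run-and-reverse motility would enter as additional rules on top of this skeleton.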
The second example of magnetic microswimmers are rigid magnetic propellers, such as helices or randomly shaped propellers. These propellers are actuated and directed by an external rotating magnetic field. One open question is how shape and magnetic properties influence the propeller behavior; the goal of this research field is to design the best propeller for a given situation. The aim of the thesis is to propose a simulation method that reproduces the behavior of experimentally realized propellers and determines their magnetic properties. The hydrodynamic simulations are based on the use of the mobility matrix. As the main result, I propose a method to match the experimental data, while showing that not only the shape but also the magnetic properties influence the propellers' swimming characteristics.
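The mobility-matrix approach can be illustrated with a toy rigid body: at low Reynolds number the translational and angular velocities respond linearly to force and torque, [U, Omega] = M [F, T], and a nonzero translation-rotation coupling block converts the torque applied by the rotating field into translation, which is the propulsion mechanism of a chiral propeller. All numerical entries below are illustrative, not fitted to any real propeller:

```python
import numpy as np

# 6x6 mobility matrix of a hypothetical chiral body, in block form
mu_tt = 1.0 * np.eye(3)            # translation-translation block
mu_rr = 2.0 * np.eye(3)            # rotation-rotation block
mu_tr = np.diag([0.0, 0.0, 0.3])   # chiral coupling along the helix axis z

M = np.block([[mu_tt, mu_tr],
              [mu_tr.T, mu_rr]])

force = np.zeros(3)                    # force-free swimmer
torque = np.array([0.0, 0.0, 1.0])     # torque exerted by the rotating field
U_Omega = M @ np.concatenate([force, torque])
U, Omega = U_Omega[:3], U_Omega[3:]    # the coupling yields U[2] != 0
```

For a real propeller the blocks would be computed from its discretized shape (e.g. by bead models), and the magnetic moment would determine how the field torque is generated.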
One of the rules of thumb of colloid and surface physics is that most surfaces are charged when in contact with a solvent, usually water. This is the case, for instance, in charge-stabilized colloidal suspensions, where the surfaces of the colloidal particles are charged (usually with a charge of hundreds to thousands of e, the elementary charge), in monolayers of ionic surfactants sitting at an air-water interface (where the water-loving head groups become charged by releasing counterions), or in bilayers containing charged phospholipids (as in cell membranes). In this work, we look at some model systems that, although simplified versions of reality, are expected to capture some of the physical properties of real charged systems (colloids and electrolytes). We initially study the simple double layer, composed of a charged wall in the presence of its counterions. The charges at the wall are smeared out and the dielectric constant is the same everywhere. The Poisson-Boltzmann (PB) approach gives asymptotically exact counterion density profiles around charged objects in the weak-coupling limit of systems with low-valent counterions, surfaces with low charge density and high temperature (or small Bjerrum length). Using Monte Carlo simulations, we obtain the profiles around the charged wall and compare them with both Poisson-Boltzmann theory (in the low-coupling limit) and the novel strong-coupling (SC) theory in the opposite limit of high couplings. In the latter limit, the simulations show that SC indeed leads to asymptotically correct density profiles. We also compare the Monte Carlo data with previously calculated corrections to the Poisson-Boltzmann theory, and we discuss in detail the methods used to perform the computer simulations. After studying the simple double layer in detail, we introduce a dielectric jump at the charged wall and investigate its effect on the counterion density distribution.
As we will show, the Poisson-Boltzmann description of the double layer remains a good approximation at low coupling values, while the strong-coupling theory is shown to lead to the correct density profiles close to the wall (and at all couplings). For very large couplings, only systems where the difference between the dielectric constants of the wall and of the solvent is small are shown to be well described by SC. Another experimentally relevant modification to the simple double layer is to make the charges at the plane discrete. The counterions are still assumed to be point-like, but we constrain the distance of approach between ions in the plane and counterions to a minimum distance D. The ratio between D and the distance between neighboring ions in the plane is, as we will see, one of the important quantities determining the influence of the discrete nature of the wall charges on the density profiles. Another parameter that plays an important role, as in the previous case, is the coupling: as we will demonstrate, systems with a higher coupling parameter are more subject to discretization effects than systems with a low coupling parameter. After studying the isolated double layer, we look at the interaction between two double layers. The system is composed of two equally charged walls at distance d, with the counterions confined between them. The charge at the walls is smeared out and the dielectric constant is the same everywhere. Using Monte Carlo simulations, we obtain the inter-plate pressure in the global parameter space, and the pressure is shown to be negative (attraction) under certain conditions. The simulations also show that the equilibrium plate separation (where the pressure changes from attractive to repulsive) exhibits a novel unbinding transition. We compare the Monte Carlo results with the strong-coupling theory, which is shown to describe well the bound states of systems with moderate and high couplings.
The regime where the two walls are very close to each other is also shown to be well described by the SC theory. Finally, using a field-theoretic approach, we derive the exact low-density ("virial") expansion of a binary mixture of positively and negatively charged hard spheres (two-component hard-core plasma, TCPHC). The free energy obtained is valid for systems where the diameters d_+ and d_- and the charge valences q_+ and q_- of positive and negative ions are unconstrained, i.e., the same expression can be used to treat dilute salt solutions (where typically d_+ ~ d_- and q_+ ~ q_-) as well as colloidal suspensions (where the difference in size and valence between macroions and counterions can be very large). We also discuss some applications of our results.
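The two limiting counterion profiles discussed above have simple closed forms when lengths are rescaled by the Gouy-Chapman length: the Poisson-Boltzmann profile decays algebraically, the strong-coupling profile exponentially, and both share the same contact value and integrate to the wall charge (electroneutrality). A short numerical check of these textbook properties in dimensionless units:

```python
import numpy as np

# dimensionless profiles, z in units of the Gouy-Chapman length,
# density in units of the contact density
z = np.linspace(0.0, 50.0, 200001)
rho_pb = 1.0 / (1.0 + z) ** 2   # Poisson-Boltzmann (weak coupling)
rho_sc = np.exp(-z)             # strong-coupling limit

def integrate(y, x):
    """Trapezoidal rule on a uniform grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# electroneutrality: both profiles carry (nearly) unit integrated charge
q_pb = integrate(rho_pb, z)   # exact value on [0, 50] is 1 - 1/51
q_sc = integrate(rho_sc, z)   # essentially 1
```

The algebraic tail of the PB profile versus the exponential SC decay is exactly the qualitative difference the Monte Carlo simulations probe between the two coupling regimes.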
Coarse-grained molecular model for the Glycosylphosphatidylinositol anchor with and without protein
(2020)
Glycosylphosphatidylinositol (GPI) anchors are a unique class of complex glycolipids that anchor a great variety of proteins to the extracellular leaflet of plasma membranes of eukaryotic cells. These anchors can exist either with or without an attached protein called GPI-anchored protein (GPI-AP) both in vitro and in vivo. Although GPIs are known to participate in a broad range of cellular functions, it is to a large extent unknown how these are related to GPI structure and composition. Their conformational flexibility and microheterogeneity make it difficult to study them experimentally. Simplified atomistic models are amenable to all-atom computer simulations in small lipid bilayer patches but not suitable for studying their partitioning and trafficking in complex and heterogeneous membranes. Here, we present a coarse-grained model of the GPI anchor constructed with a modified version of the MARTINI force field that is suited for modeling carbohydrates, proteins, and lipids in an aqueous environment using MARTINI's polarizable water. The nonbonded interactions for sugars were reparametrized by calculating their partitioning free energies between polar and apolar phases. In addition, sugar-sugar interactions were optimized by adjusting the second virial coefficients of osmotic pressures for solutions of glucose, sucrose, and trehalose to match with experimental data. With respect to the conformational dynamics of GPI-anchored green fluorescent protein, the accessible time scales are now at least an order of magnitude larger than for the all-atom system. This is particularly important for fine-tuning the mutual interactions of lipids, carbohydrates, and amino acids when comparing to experimental results. We discuss the prospective use of the coarse-grained GPI model for studying protein-sorting and trafficking in membrane models.
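The calibration of the sugar-sugar interactions mentioned above rests on the truncated virial expansion of the osmotic pressure, Pi/(c kT) = 1 + B2 c. The following sketch shows how a second osmotic virial coefficient can be extracted by a linear fit; the data points are synthetic, not MARTINI simulation results:

```python
import numpy as np

# hypothetical concentrations (arbitrary units) and an assumed B2
c = np.array([0.05, 0.10, 0.20, 0.40, 0.80])
b2_true = 0.35
pi_over_ckt = 1.0 + b2_true * c   # synthetic "measured" compressibility factors

# linear fit of Pi/(c kT) against c: slope = B2, intercept -> 1 (ideal limit)
slope, intercept = np.polyfit(c, pi_over_ckt, 1)
b2_fit = slope
```

In the force-field work, the simulated osmotic pressures of glucose, sucrose, and trehalose solutions play the role of the synthetic data here, and the interaction parameters are tuned until the fitted B2 matches experiment.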
A crucial question facing cognitive science concerns the nature of conceptual representations as well as the constraints on the interactions between them. One specific question we address in this paper is what makes cross-representational interplay possible? We offer two distinct theoretical scenarios: according to the first scenario, co-activated knowledge representations interact with the help of an interface established between them via congruent activation in a mediating third-party general cognitive mechanism, e.g., attention. According to the second scenario, co-activated knowledge representations interact due to an overlap between their features, for example when they share a magnitude component. First, we make a case for cross-representational interplay based on grounded and situated theories of cognition. Second, we discuss interface-based interactions between distinct (i.e., non-overlapping) knowledge representations. Third, we discuss how co-activated representations may share their architecture via partial overlap. Finally, we outline constraints regarding the flexibility of these proposed mechanisms.
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. They must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational intersection between production planning and the shop-floor resources, and necessitates the capability to manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain constant production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems on the basis of five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analysis reveals that deep reinforcement learning excels in various applications, especially in dynamic production control tasks. Its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep-reinforcement-learning-based approaches is limited to specific job shop scenarios and often overlooks the potential synergies of combined resources. Furthermore, the analysis highlights the rare implementation of multi-agent systems and semi-heterarchical systems in practical settings. A notable gap remains in the integration of deep reinforcement learning into a hyper-heuristic.
To bridge these research gaps, this thesis introduces a deep-reinforcement-learning-based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimisation complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research efforts and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
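The idea of a learning hyper-heuristic, i.e. an agent that selects among low-level dispatching rules depending on the observed system state, can be sketched with tabular Q-learning standing in for the deep networks of the thesis. The states, rules, and rewards below are hypothetical and serve only to illustrate the control scheme:

```python
import numpy as np

RULES = ["FIFO", "SPT"]  # candidate low-level dispatching heuristics

def reward(state, action, rng):
    # hypothetical shop-floor feedback: SPT pays off under high load
    # (state 1), FIFO under low load (state 0), plus observation noise
    best = 1 if state == 1 else 0
    base = 1.0 if action == best else 0.0
    return base + 0.1 * rng.standard_normal()

rng = np.random.default_rng(42)
q = np.zeros((2, 2))          # Q[state, action]
alpha, eps = 0.1, 0.2         # learning rate, exploration rate
for _ in range(5000):
    s = int(rng.integers(2))  # observed load state (low / high)
    # epsilon-greedy rule selection
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q[s]))
    # contextual-bandit style Q-update from the observed reward
    q[s, a] += alpha * (reward(s, a, rng) - q[s, a])

policy = [RULES[int(np.argmax(q[s]))] for s in (0, 1)]  # learned rule per state
```

The thesis replaces the Q-table with deep networks, embeds the agents in a semi-heterarchical multi-agent framework, and derives rewards from throughput time and tardiness rather than the toy signal used here.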
Modern production infrastructures of globally operating companies usually consist of multiple distributed production sites. While the organization of individual sites consisting of Industry 4.0 components itself is demanding, new questions regarding the organization and allocation of resources emerge considering the total production network. In an attempt to face the challenge of efficient distribution and processing both within and across sites, we aim to provide a hybrid simulation approach as a first step towards optimization. Using hybrid simulation allows us to include real and simulated concepts and thereby benchmark different approaches with reasonable effort. A simulation concept is conceptualized and demonstrated qualitatively using a global multi-site example.
Diagenetic trends of synthetic reservoir sandstone properties assessed by digital rock physics
(2021)
Quantifying interactions and dependencies among geometric, hydraulic and mechanical properties of reservoir sandstones is of particular importance for the exploration and utilisation of the geological subsurface and can be assessed by synthetic sandstones reproducing the microstructural complexity of natural rocks. In the present study, three highly resolved samples of the Fontainebleau, Berea and Bentheim sandstones are generated by means of a process-based approach that combines the gravity-driven deposition of irregularly shaped grains with their diagenetic cementation following three different schemes. The resulting evolution of porosity, permeability and rock stiffness is examined and compared to that of the respective micro-computed tomography (micro-CT) scans. The grain-contact-preferential scheme implies a progressive clogging of small throats and consequently produces considerably less connected and stiffer samples than the two other schemes. By contrast, uniform quartz overgrowth continuously alters the pore space and leads to the lowest elastic properties. The proposed stress-dependent cementation scheme combines the contact-cement and quartz-overgrowth approaches, resulting in granulometric, hydraulic and elastic properties equivalent to those of the respective micro-CT scans, with bulk moduli deviating by only 0.8%, 4.9% and 2.5% for the Fontainebleau, Berea and Bentheim sandstone, respectively. The synthetic samples can be further altered to examine the impact of mineral dissolution or precipitation as well as fracturing on various petrophysical correlations, which is of particular relevance for numerous aspects of a sustainable subsurface utilisation.
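The coupling between progressive cementation, porosity and permeability described above is often summarized by Kozeny-Carman-type relations. A sketch with illustrative constants, not the process-based models of the study, showing how strongly cement-driven porosity loss reduces permeability:

```python
# Kozeny-Carman-style trend: permeability k scales with
# phi^3 / (1 - phi)^2 for porosity phi (prefactor c is illustrative)
def kozeny_carman(phi, c=1.0):
    return c * phi ** 3 / (1.0 - phi) ** 2

# hypothetical cementation step: porosity reduced from 25% to 10%
k_initial = kozeny_carman(0.25)
k_final = kozeny_carman(0.10)
ratio = k_initial / k_final  # permeability drops by a factor of 22.5
```

In the process-based samples, permeability is instead computed directly on the evolving pore geometry, which is what allows the different cementation schemes to produce distinct porosity-permeability trends.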
This dissertation consists of five self-contained essays, addressing different aspects of career choices, especially the choice of entrepreneurship, under risk and ambiguity. In Chapter 2, the first essay develops an occupational choice model with boundedly rational agents, who lack information, receive noisy feedback, and are restricted in their decisions by their personality, to analyze and explain puzzling empirical evidence on entrepreneurial decision processes. In the second essay, in Chapter 3, I contribute to the literature on entrepreneurial choice by constructing a general career choice model on the basis of the assumption that outcomes are partially ambiguous. The third essay, in Chapter 4, theoretically and empirically analyzes the impact of media on career choices, where information on entrepreneurship provided by the media is treated as an informational shock affecting prior beliefs. The fourth essay, presented in Chapter 5, contains an empirical analysis of the effects of cyclical macro variables (GDP and unemployment) on innovative start-ups in Germany. In the fifth, and last, essay in Chapter 6, we examine whether information on personality is useful for advice, using the example of career advice.
Evaluating the performance of self-adaptive systems is challenging due to their interactions with often highly dynamic environments. In the specific case of self-healing systems, the performance evaluations of self-healing approaches and their parameter tuning rely on the considered characteristics of failure occurrences and the resulting interactions with the self-healing actions. In this paper, we first study the state of the art in evaluating the performance of self-healing systems by means of a systematic literature review. We provide a classification of different input types for such systems and analyse the limitations of each input type. A main finding is that the employed inputs are often not sophisticated regarding the considered characteristics of failure occurrences. To further study the impact of the identified limitations, we present experiments demonstrating that wrong assumptions regarding the characteristics of the failure occurrences can result in large performance prediction errors, disadvantageous design-time decisions concerning the selection of alternative self-healing approaches, and disadvantageous deployment-time decisions concerning parameter tuning. Furthermore, the experiments indicate that employing multiple alternative input characteristics can help with reducing the risk of premature disadvantageous design-time decisions.
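The paper's point that failure-occurrence characteristics matter can be illustrated by two input models with the same mean failure rate but very different burstiness; the distributions and rates below are hypothetical, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(7)
n, rate = 20000, 1.0

# input model A: memoryless Poisson failures (exponential inter-arrival gaps)
poisson_gaps = rng.exponential(1.0 / rate, n)

# input model B: bursty failures with the same mean gap of 1.0,
# mixing short within-burst gaps and long between-burst gaps
bursty_gaps = np.where(rng.random(n) < 0.9,
                       rng.exponential(0.1, n),
                       rng.exponential(9.1, n))

# same mean load, but very different variability (coefficient of variation)
mean_p, mean_b = poisson_gaps.mean(), bursty_gaps.mean()
cv_p = poisson_gaps.std() / mean_p   # close to 1 for a Poisson process
cv_b = bursty_gaps.std() / mean_b    # well above 1 for the bursty trace
```

A self-healing approach tuned against trace A alone can look much better than it performs under trace B, which is exactly the kind of premature design-time decision the experiments in the paper warn about.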
Mechanical and/or chemical removal of material from the subsurface may generate large subsurface cavities, the destabilisation of which can lead to ground collapse and the formation of sinkholes. Numerical simulation of the interaction of cavity growth, host material deformation and overburden collapse is desirable to better understand the sinkhole hazard but is a challenging task due to the high strains and material discontinuities involved. Here, we present 2-D distinct element method numerical simulations of cavity growth and sinkhole development. Firstly, we simulate cavity formation by quasi-static, stepwise removal of material in a single growing zone of an arbitrary geometry and depth. We benchmark this approach against analytical and boundary element method models of a deep void space in a linear elastic material. Secondly, we explore the effects of properties of different uniform materials on cavity stability and sinkhole development. We perform simulated biaxial tests to calibrate macroscopic geotechnical parameters of three model materials representative of those in which sinkholes develop at the Dead Sea shoreline: mud, alluvium and salt. We show that weak materials do not support large cavities, leading to gradual sagging or suffusion-style subsidence. Strong materials support quasi-stable to stable cavities, the overburdens of which may fail suddenly in a caprock or bedrock collapse style. Thirdly, we examine the consequences of layered arrangements of weak and strong materials. We find that these are more susceptible to sinkhole collapse than uniform materials, not only due to a lower integrated strength of the overburden but also due to an inhibition of stabilising stress arching. Finally, we compare our model sinkhole geometries to observations at the Ghor Al-Haditha sinkhole site in Jordan. Sinkhole depth/diameter ratios of 0.15 in mud, 0.37 in alluvium and 0.33 in salt are reproduced successfully in the calibrated model materials. The model results suggest that the observed distribution of sinkhole depth/diameter values in each material type may partly reflect sinkhole growth trends.
Background: Recent research reported height-biased migration of taller individuals, and a Monte Carlo simulation showed that such preferential migration of taller individuals into network hubs can induce a secular trend of height. In the simulation model, taller agents in the hubs raise the overall height of all individuals in the network through a community effect. However, the strength of this effect depends on the actual network structure. In this paper, the background and the influence of the network structure on the strength of the migration-induced secular trend are investigated. Material and methods: Three principal network types are analyzed: networks derived from street connections in Switzerland, more regular fishing-net-like networks, and randomly generated ones. Our networks have between 10 and 152 nodes and between 20 and 307 edges connecting the nodes. Depending on the network size, between 5,000 and 90,000 agents with an average height of 170 cm (SD 6.5 cm) are initially released into the network. In each iteration, new agents are regenerated based on the current average body height of the previous iteration, corrected to a certain proportion by the body heights in the neighboring nodes. After new agents are generated, a certain number of them migrate into neighboring nodes; the model preferentially lets taller agents migrate into network hubs. Migration is balanced by back-migration of the same number of agents from nodes with high centrality measures to less connected nodes. The latter is also random and not biased by the agents' height. Furthermore, the distribution of agents per node and its correlation with the centrality of the nodes are varied systematically. After 100 iterations, the secular trend, i.e. the gain in body height for the different networks, is investigated in relation to the network properties. Results: We observe an increase in average agent body height after 100 iterations if height-biased migration is enabled. The increase rate depends on the strength of the neighborhood influence factor, the population distribution, the relationship between the population in the nodes and their centrality, and the network topology. Networks with uniform-like distributions of agents across the nodes, uncorrelated associations between node centrality and agent number per node, and very heterogeneous networks with very different node centralities lead to the biggest gains in average body height. Conclusion: Our simulations show that height-biased migration into network hubs can contribute to the secular trend of height increase in the human population. The strength of this "tall by migration" effect depends on the actual properties of the underlying network. This mechanism may also be significant for social networks, where hubs are represented by individuals and edges by their personal relationships. However, the high number of iterations required to achieve significant effects in the more natural network structures in our models calls for further studies to test the relevance and real effect sizes in real-world scenarios.
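The iteration scheme described in the abstract (neighborhood-corrected regeneration of heights, followed by height-biased migration into hubs with random back-migration) can be sketched in a few lines. The following toy model is not the authors' implementation: the four-node graph, the agent counts, the influence weight, and the number of migrants are all illustrative assumptions, and node degree is used as a simple stand-in for the various centrality measures mentioned in the text.

```python
import random
import statistics

# Illustrative toy network (adjacency list); the study's street, fishing-net
# and random networks are not reproduced here.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
degree = {n: len(adj) for n, adj in graph.items()}  # degree as a hub proxy

rng = random.Random(42)
agents = {n: [rng.gauss(170, 6.5) for _ in range(200)] for n in graph}

def iterate(agents, influence=0.3, migrants=10):
    """One iteration: regenerate heights around a neighborhood-corrected
    mean (community effect), then move the tallest agents toward a
    higher-degree neighbor, balanced by random back-migration."""
    means = {n: statistics.mean(h) for n, h in agents.items()}
    for n, adj in graph.items():
        # Community effect: the node mean is pulled toward its neighbors.
        target = (1 - influence) * means[n] + influence * statistics.mean(
            means[m] for m in adj)
        agents[n] = [rng.gauss(target, 6.5) for _ in agents[n]]
    for n, adj in graph.items():
        hubs = [m for m in adj if degree[m] > degree[n]]
        if not hubs:
            continue
        hub = max(hubs, key=degree.get)
        agents[n].sort()
        tall = [agents[n].pop() for _ in range(migrants)]  # tallest emigrate
        agents[hub].extend(tall)
        for _ in range(migrants):  # random, height-blind back-migration
            agents[n].append(agents[hub].pop(rng.randrange(len(agents[hub]))))
    return agents

for _ in range(100):
    agents = iterate(agents)
```

Because back-migration is random while emigration takes the tallest agents, the hub's mean rises first and the neighborhood correction then propagates the gain through the network, which is the qualitative mechanism behind the secular trend in the abstract.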
Between 1990 and 1994, around 1,000 properties in the former GDR that had been used for military exercises by the Soviet Army and the NVA were handed over to the federal and state governments. The largest military training areas are located in Brandenburg and are today partly integrated into large protected areas, while other sites continue to be actively used by the Bundeswehr. Owing to military operations, the soils of these training areas are often contaminated with unexploded ordnance, munitions remnants, fuel and lubricant residues, and even chemical warfare agents. However, alongside these areas contaminated by munitions and military exercises, almost all of the properties also contain areas of high conservation value; in the open-land areas in particular, such value may well coincide with ordnance contamination. Characteristic of these open areas, which include dwarf-shrub heaths, dry grasslands, desert-like sand plains, and other nutrient-poor treeless habitats, are their large extent, their seclusion, and their particular use and management, i.e. the absence of agriculture, forestry, and settlement. These characteristics formed the basis for the development of a specially adapted flora and fauna. After military operations ceased, large-scale succession, i.e. the gradual change in the composition of plant and animal communities, set in across wide areas, in places already transforming these open areas into forest and thus causing them to disappear. This in turn led to the loss of the animal and plant species bound to these open-land areas. To preserve, shape, and develop these open areas, an interdisciplinary group of natural scientists therefore investigated the effectiveness of various methods and concepts, so that measures suited to the respective site conditions could be initiated.
Prerequisites for initiating these measures are, on the one hand, knowledge of the respective site conditions, i.e. the current state, and, on the other, knowledge of how the areas develop, i.e. their dynamics. This allows the future development of the areas to be estimated so that measures can be deployed efficiently. Geographic information systems (GIS) play a decisive role in the digital documentation of biotope and land-use types, since they make it possible to process large volumes of spatially and temporally referenced geometric and attribute data. A domain-specific GIS for military training areas was therefore designed and implemented. The tasks comprised the design of the database and the object model as well as domain-specific modelling, analysis, and presentation functions. For the integration of thematic data into the GIS database, a metadata catalogue was also developed, which is available as an additional GIS tool. The base data for the GIS were obtained from remote sensing data, topographic maps, and field mapping. As an instrument for estimating future development, the simulation tool AST4D was developed, which allows both the use of the GIS (raster) data as input for the simulations and the use of the simulation results in the GIS. In addition, the data can be visualized spatially in AST4D. The mathematical construct underlying the tool is a cellular automaton, with which the development of the areas can be simulated under different conditions. This made it possible to form different scenarios, i.e. to simulate the development of the areas with different (known) input parameters and the resulting different (unknown) end states. Before running any of the three simulation levels available in AST4D, user-specific settings adapted to the respective study area can be made.
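AST4D itself is not described in code here, but the cellular-automaton principle it builds on can be illustrated with a minimal succession model. Everything in this sketch is an invented assumption: the three land-cover states, the Moore-neighborhood rule, and the transition probabilities merely demonstrate how different input parameters (scenarios) lead to different end states on a raster grid.

```python
import random

# Hypothetical land-cover states for the sketch (not AST4D's actual classes).
OPEN, SHRUB, FOREST = 0, 1, 2

def neighbors(grid, r, c):
    """Moore neighborhood (up to 8 cells), clipped at the grid edges."""
    rows, cols = len(grid), len(grid[0])
    return [grid[i][j]
            for i in range(max(0, r - 1), min(rows, r + 2))
            for j in range(max(0, c - 1), min(cols, c + 2))
            if (i, j) != (r, c)]

def step(grid, base_rate=0.05, neighbor_boost=0.1, rng=random):
    """One succession iteration: each cell advances one state
    (open -> shrub -> forest) with a probability that grows with the
    number of more advanced neighboring cells."""
    new = [row[:] for row in grid]
    for r, row in enumerate(grid):
        for c, state in enumerate(row):
            if state == FOREST:
                continue
            pressure = sum(1 for n in neighbors(grid, r, c) if n > state)
            p = min(1.0, base_rate + neighbor_boost * pressure)
            if rng.random() < p:
                new[r][c] = state + 1
    return new

# Scenario run: known input parameters, initially open raster with one
# advanced seed cell; varying base_rate/neighbor_boost yields different
# (initially unknown) end states.
grid = [[OPEN] * 20 for _ in range(20)]
grid[10][10] = FOREST
for _ in range(50):
    grid = step(grid)
```

In a GIS coupling of the kind the abstract describes, the input grid would be read from the GIS raster layers and the final `grid` written back for spatial visualization; the transition rule is where site conditions and management measures would enter.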
Modeling a secular trend by Monte Carlo simulation of height biased migration in a spatial network
(2017)
Background: In a recent Monte Carlo simulation, the clustering of body height of Swiss military conscripts within a spatial network with characteristic features of the natural Swiss geography was investigated. In this study, I examined the effect of migration of tall individuals into network hubs on the dynamics of body height within the whole spatial network. The aim of this study was to simulate height trends. Material and methods: Three networks were used for modeling: a regular, rectangular, fishing-net-like network, a real-world example based on the geographic map of Switzerland, and a random network. All networks contained between 144 and 148 districts and between 265 and 307 road connections. Around 100,000 agents were initially released, with an average height of 170 cm and a height standard deviation of 6.5 cm. The simulation was started with the a priori assumption that height variation within a district is limited and also depends on the height of neighboring districts (community effect on height). In addition to a neighborhood influence factor, which simulates a community effect, body-height-dependent migration of conscripts between adjacent districts was used in each Monte Carlo iteration to re-calculate next-generation body heights. To determine the direction of migration for taller individuals, various centrality measures were applied to evaluate the importance of a district within the spatial network, and network hubs were defined as the districts ranked most important by these measures. Taller individuals were favored to migrate into network hubs, whereas backward migration of the same number of individuals was random and not biased towards body height. In the null model there were no road connections, so height information could not be exchanged between the districts. Results: Due to the favored migration of tall individuals into network hubs, the average body height of the hubs, and later of the whole network, increased by up to 0.1 cm per iteration, depending on the network model. The general increase in height within the network depended on connectedness and on the amount of height information exchanged between neighboring districts. If larger amounts of neighborhood height information were exchanged, the general increase in height within the network was large (strong secular trend). The trend in the homogeneous fishnet-like network was lowest; the trend in the random network was highest. Yet some network properties, such as the heteroscedasticity and autocorrelations of the migration simulation models, differed greatly from the natural features observed in Swiss military conscript networks. Autocorrelations of district heights, for instance, were much higher in the migration models. Conclusion: This study confirmed that secular height trends can be modeled by preferred migration of tall individuals into network hubs. However, basic network properties of the migration simulation models differed greatly from the natural features observed in Swiss military conscripts. Similar network-based data from other countries should be explored to better investigate height trends with the Monte Carlo migration approach.
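The centrality measures used above to rank districts are standard graph quantities. As an illustration, here are stdlib-only implementations of two common ones, degree and closeness centrality, on an invented five-district road graph; the study's actual networks and its specific set of measures are not reproduced.

```python
from collections import deque

# Invented toy road graph: district -> adjacent districts (must be connected
# for closeness centrality to be defined).
roads = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}

def degree_centrality(graph):
    """Fraction of all other districts a district is directly connected to."""
    n = len(graph) - 1
    return {v: len(adj) / n for v, adj in graph.items()}

def closeness_centrality(graph):
    """(n-1) divided by the sum of shortest-path distances, found by BFS."""
    out = {}
    for src in graph:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            v = queue.popleft()
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        out[src] = (len(graph) - 1) / sum(dist[v] for v in graph if v != src)
    return out

# A "hub" is simply the district ranked highest by the chosen measure.
hub = max(roads, key=degree_centrality(roads).get)
```

Different measures can rank districts differently (a district on many short paths need not have the most direct connections), which is why comparing several of them, as the study does, is a sensible robustness check.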