The degradation of polymers is described by mathematical models based on bond cleavage statistics, including the decreasing probability of chain cuts with decreasing average chain length. We derive equations for the degradation of chains under a random chain cut and a chain-end cut mechanism, which are compared to existing models. The results are used to predict the influence of internal molecular parameters. It is shown that both chain cut mechanisms lead to a similar shape of the mass or molecular mass loss curve. A characteristic time is derived, which can be used to extract the maximum length l of soluble fragments of the polymer. We show that the complete description is needed to extract the degradation rate constant k from the molecular mass loss curve, and that l can be used to design polymers that lose less mechanical stability before entering the mass loss phase.
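To make the random chain cut mechanism concrete, the following is a minimal Monte Carlo sketch, not the authors' analytical model: bonds break at random with an assumed per-bond probability k, fragments at or below an assumed soluble length l dissolve, and the retained mass fraction is tracked over time. All parameter values are illustrative.

```python
import random

def simulate_random_scission(n_chains=200, length=100, k=0.001, l_sol=5, steps=200):
    """Monte Carlo sketch of random chain scission with fragment dissolution.

    Each step, every bond breaks independently with probability k;
    fragments of length <= l_sol are treated as soluble and removed.
    Returns the retained mass fraction over time.
    """
    chains = [length] * n_chains
    total_mass = n_chains * length
    mass_curve = []
    for _ in range(steps):
        new_chains = []
        for chain in chains:
            # Choose cut positions among the (chain - 1) bonds.
            cuts = [i for i in range(1, chain) if random.random() < k]
            fragments, start = [], 0
            for cut in cuts:
                fragments.append(cut - start)
                start = cut
            fragments.append(chain - start)
            # Short fragments dissolve and leave the sample (mass loss).
            new_chains.extend(f for f in fragments if f > l_sol)
        chains = new_chains
        mass_curve.append(sum(chains) / total_mass)
    return mass_curve

print(f"retained mass fraction: {simulate_random_scission()[-1]:.3f}")
```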
The business problems of inefficient processes, imprecise process analyses and simulations, and non-transparent artificial neural network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating, and optimizing processes, this paper proposes a flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the designed modeling language and its mathematical formulation and is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulation (NPS), and Neuronal Process Optimization (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
Expanding modeling notations
(2021)
Creativity is a common aspect of business processes and thus needs a proper representation in process modeling notations. However, creative processes constitute highly flexible process elements, as new and unforeseeable outcomes are developed. This presents a challenge for modeling languages. Current methods for representing creativity-intensive work are largely unable to capture the creative specifics that are relevant to successfully running and managing these processes. We outline the concept of creativity-intensive processes and present an example from a game design process in order to derive critical process aspects relevant for modeling. Six aspects are identified, first and foremost process flexibility, as well as temporal uncertainty, experience, types of creative problems, phases of the creative process, and individual criteria. By first analyzing which aspects of creative work existing modeling notations already cover, we then discuss which modeling extensions need to be developed to better represent creativity within business processes. We argue that a proper representation of creative work would not only improve the management of these processes, but would also enable process actors to run these creative processes more efficiently and adjust them to better fit creative needs.
The plasmasphere is a dynamic region of cold, dense plasma surrounding the Earth. Its shape and size are highly susceptible to variations in solar and geomagnetic conditions. Having an accurate model of plasma density in the plasmasphere is important for GNSS navigation and for predicting hazardous effects of radiation in space on spacecraft. The distribution of cold plasma and its dynamic dependence on solar wind and geomagnetic conditions remain, however, poorly quantified. Existing empirical models of plasma density tend to be oversimplified as they are based on statistical averages over static parameters. Understanding the global dynamics of the plasmasphere using observations from space remains a challenge, as existing density measurements are sparse and limited to locations where satellites can provide in-situ observations. In this dissertation, we demonstrate how such sparse electron density measurements can be used to reconstruct the global electron density distribution in the plasmasphere and capture its dynamic dependence on solar wind and geomagnetic conditions.
First, we develop an automated algorithm to determine the electron density from in-situ measurements of the electric field on the Van Allen Probes spacecraft. In particular, we design a neural network to infer the upper hybrid resonance frequency from the dynamic spectrograms obtained with the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation, which is then used to calculate the electron number density. The developed Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm is applied to more than four years of EMFISIS measurements to produce a publicly available electron density data set.
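The last step of this pipeline, converting the inferred upper hybrid resonance frequency into electron number density, follows from the standard cold-plasma relation f_uh^2 = f_pe^2 + f_ce^2. A minimal sketch of that conversion (the function name and sample frequencies are illustrative):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
M_E = 9.1093837015e-31   # electron mass (kg)
Q_E = 1.602176634e-19    # elementary charge (C)

def electron_density(f_uhr_hz, f_ce_hz):
    """Electron number density (m^-3) from the upper hybrid resonance.

    Uses f_uh^2 = f_pe^2 + f_ce^2 and n_e = (2*pi*f_pe)^2 * eps0 * m_e / e^2;
    f_ce follows from the measured magnetic field strength.
    """
    f_pe_sq = f_uhr_hz**2 - f_ce_hz**2
    if f_pe_sq <= 0:
        raise ValueError("f_uhr must exceed f_ce")
    return (2 * math.pi) ** 2 * f_pe_sq * EPS0 * M_E / Q_E**2

# Illustrative values: f_uhr = 300 kHz, f_ce = 10 kHz
n_e = electron_density(300e3, 10e3)
print(f"n_e = {n_e * 1e-6:.0f} cm^-3")
```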
We use the obtained electron density data set to develop a new global model of plasma density employing a neural network-based modeling approach. The model takes the location and the time history of geomagnetic indices as inputs, and produces the electron density in the equatorial plane as output. It is extensively validated using in-situ density measurements from the Van Allen Probes mission, as well as by comparing the predicted global evolution of the plasmasphere with global IMAGE EUV images of the He+ distribution. The model successfully reproduces the erosion of the plasmasphere on the night side as well as plume formation and evolution, and agrees well with the data.
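As a rough stand-in for such a model, the sketch below fits a small feed-forward network on synthetic inputs (L-shell, magnetic local time, and a short Kp history) against a toy log-density relation; the dissertation's actual architecture, inputs, and training data are more elaborate.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 5000
# Illustrative inputs: L-shell, MLT (encoded as a cyclic angle), Kp history.
L = rng.uniform(2, 6, n)
mlt = rng.uniform(0, 24, n)
kp_hist = rng.uniform(0, 9, (n, 4))  # Kp at t, t-3h, t-6h, t-12h
X = np.column_stack([L, np.sin(np.pi * mlt / 12), np.cos(np.pi * mlt / 12), kp_hist])
# Synthetic log-density target: falls with L and recent activity (toy relation).
y = 3.5 - 0.45 * L - 0.08 * kp_hist.mean(axis=1) + rng.normal(0, 0.05, n)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)
print(f"training R^2: {model.score(X, y):.2f}")
```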
The performance of neural networks strongly depends on the availability of training data, which is limited during intervals of high geomagnetic activity. In order to provide reliable density predictions during such intervals, we can employ physics-based modeling. We develop a new approach for optimally combining the neural network- and physics-based models of the plasmasphere by means of data assimilation. The developed approach utilizes advantages of both neural network- and physics-based modeling and produces reliable global plasma density reconstructions for quiet, disturbed, and extreme geomagnetic conditions.
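One simple way to picture this combination is an inverse-variance (Kalman-style) weighting of the two density estimates; the dissertation's assimilation scheme is more involved, and the uncertainties below are illustrative assumptions.

```python
import numpy as np

def assimilate(nn_density, nn_var, phys_density, phys_var):
    """Inverse-variance blend of two density estimates (Kalman-style update).

    Where the neural network is well constrained by data (small nn_var),
    its estimate dominates; in data-poor, disturbed intervals (large
    nn_var) the physics-based model takes over.
    """
    gain = phys_var / (phys_var + nn_var)
    blended = phys_density + gain * (nn_density - phys_density)
    blended_var = (1.0 - gain) * phys_var
    return blended, blended_var

# Illustrative densities (cm^-3) on a two-cell grid: the NN is trusted in
# the first cell (low variance) and distrusted in the second.
nn = np.array([1200.0, 800.0])
phys = np.array([1000.0, 900.0])
d, v = assimilate(nn, np.array([50.0, 1e4]), phys, np.array([200.0, 200.0]))
print(d)  # first cell follows the NN, second stays near the physics model
```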
Finally, we extend the developed machine learning-based tools and apply them to another important problem in the field of space weather, the prediction of the geomagnetic index Kp. The Kp index is one of the most widely used indicators for space weather alerts and serves as input to various models, such as for the thermosphere, the radiation belts and the plasmasphere. It is therefore crucial to predict the Kp index accurately. Previous work in this area has mostly employed artificial neural networks to nowcast and make short-term predictions of Kp, basing their inferences on the recent history of Kp and solar wind measurements at L1. We analyze how the performance of neural networks compares to other machine learning algorithms for nowcasting and forecasting Kp for up to 12 hours ahead. Additionally, we investigate several machine learning and information theory methods for selecting the optimal inputs to a predictive model of Kp. The developed tools for feature selection can also be applied to other problems in space physics in order to reduce the input dimensionality and identify the most important drivers.
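As an illustration of the information-theoretic input selection mentioned above, the sketch below ranks candidate lagged driver features by their mutual information with a future Kp target; the synthetic data and feature names are placeholders, not the study's data set.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 2000

# Placeholder features: lagged drivers a real study would derive from
# solar wind measurements at L1 and the Kp record itself.
features = {
    "kp_t-3h": rng.uniform(0, 9, n),
    "v_sw_t-1h": rng.normal(450, 100, n),
    "bz_t-1h": rng.normal(0, 5, n),
    "density_t-1h": rng.lognormal(1.5, 0.5, n),
}
X = np.column_stack(list(features.values()))
# Synthetic target: Kp 12 h ahead, driven by recent Kp, Bz, and speed here.
y = (0.4 * features["kp_t-3h"] - 0.3 * features["bz_t-1h"]
     + 0.005 * features["v_sw_t-1h"] + rng.normal(0, 0.5, n))

mi = mutual_info_regression(X, y, random_state=0)
for name, score in sorted(zip(features, mi), key=lambda p: -p[1]):
    print(f"{name:14s} MI = {score:.3f}")
```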
Research outlined in this dissertation clearly demonstrates that machine learning tools can be used to develop empirical models from sparse data and also can be used to understand the underlying physical processes. Combining machine learning, physics-based modeling and data assimilation allows us to develop novel methods benefiting from these different approaches.
A challenge for eco-evolutionary research is to better understand the effects of climate and landscape changes on species and their distributions. Populations can respond to changes in their environment through local genetic adaptation or plasticity, dispersal, or local extinction. The individual-based modeling (IBM) approach has been repeatedly applied to assess organismic responses to environmental changes. IBMs simulate adaptive behaviors emerging from the basic entities upon which both ecological and evolutionary mechanisms act. The objective of this review is to summarize the state of the art of eco-evolutionary IBMs and to explore to what degree they already address the key responses of organisms to environmental change. In doing so, we identify promising approaches and potential knowledge gaps in the implementation of eco-evolutionary mechanisms to motivate future research. Using mainly the ISI Web of Science, we find that most of the progress in eco-evolutionary IBMs over the last decades was achieved for genetic adaptation to novel local environmental conditions. There is, however, not a single eco-evolutionary IBM that addresses the three potential adaptive responses simultaneously. Additionally, IBMs implementing adaptive phenotypic plasticity are rare. Most commonly, plasticity is implemented as random noise or as reaction norms. Our review further identifies a current lack of models in which plasticity is an evolving trait. Future eco-evolutionary models should consider dispersal and plasticity as evolving traits with their associated costs and benefits. Such an integrated approach could help identify conditions promoting population persistence depending on the life history strategy of organisms and the environment they experience.
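To make the reaction-norm representation of plasticity concrete, here is a toy sketch of an individual whose phenotype responds linearly to the environment, with the reaction-norm slope treated as a heritable, costly trait; the parameter names and the Gaussian fitness function are illustrative and not taken from any reviewed model.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Individual:
    genotype: float    # phenotype expressed in the reference environment
    plasticity: float  # heritable reaction-norm slope (an evolving trait)

    def phenotype(self, env, ref_env=0.0, noise_sd=0.1):
        # Linear reaction norm plus developmental noise.
        return self.genotype + self.plasticity * (env - ref_env) + random.gauss(0, noise_sd)

def fitness(phenotype, optimum, width=1.0, plasticity=0.0, cost=0.05):
    """Gaussian stabilizing selection minus a maintenance cost of plasticity."""
    return math.exp(-((phenotype - optimum) ** 2) / (2 * width**2)) - cost * abs(plasticity)

# A plastic individual in a novel environment whose optimum tracks the environment.
ind = Individual(genotype=0.5, plasticity=0.8)
env = 1.5
print(fitness(ind.phenotype(env), optimum=env, plasticity=ind.plasticity))
```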
The soft error rate (SER) of a clock tree under heavy-ion irradiation is investigated in this paper. A method for clock tree SER prediction is developed, which employs a dedicated soft error analysis tool to characterize the single-event transient (SET) sensitivities of the clock inverters, and other commercial tools to calculate the SER through fault-injection simulations. A test circuit comprising a flip-flop chain and a clock tree in a 65 nm CMOS technology is developed through an automatic ASIC design flow. This circuit is analyzed with the developed method to calculate its clock tree SER. In addition, the circuit is implemented in a 65 nm test chip and irradiated with heavy ions to measure the SER resulting from SETs in the clock tree. The experimental and calculated results of this case study show good agreement, which verifies the effectiveness of the developed method.
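A schematic version of the fault-injection step might look as follows: each clock inverter contributes an SET cross-section (as characterized by a soft error analysis tool) weighted by the sampled probability that an injected transient is latched. The numbers and the propagation model are illustrative, not the paper's tool chain.

```python
import random

def clock_tree_ser(inverters, flux, n_injections=100_000):
    """Monte Carlo sketch of clock-tree SER estimation via fault injection.

    inverters: list of (set_cross_section_cm2, capture_probability) pairs,
    where capture_probability stands in for the fraction of injected SETs
    that reach a flip-flop during its latching window.
    flux: particle flux in cm^-2 s^-1.
    Returns an SER estimate in errors per second.
    """
    ser = 0.0
    for sigma, p_capture in inverters:
        captured = sum(random.random() < p_capture for _ in range(n_injections))
        ser += sigma * flux * captured / n_injections
    return ser

# Illustrative clock tree: inverter stages with decreasing capture probability.
tree = [(1e-9, 0.30), (1.2e-9, 0.12), (0.8e-9, 0.05)]
print(f"SER ~ {clock_tree_ser(tree, flux=1e5):.3e} errors/s")
```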
Bridging metabolomics with plant phenotypic responses is challenging. Multivariate analyses account for the existing dependencies among metabolites, and regression models in particular capture such dependencies in the search for associations with a given trait. However, special care must be taken with metabolomics data. Here we propose a modeling workflow that addresses the caveats imposed by such large data sets.
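A minimal instance of such a workflow, assuming standardization followed by an L1-regularized regression to cope with many correlated metabolites (the estimator choice and the synthetic data are illustrative):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_metabolites = 120, 400  # typical "p >> n" metabolomics shape
X = rng.lognormal(0, 1, size=(n_samples, n_metabolites))
# Toy trait driven by the first five metabolites plus noise.
trait = X[:, :5].sum(axis=1) + rng.normal(0, 1, n_samples)

model = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=0))
scores = cross_val_score(model, X, trait, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")

model.fit(X, trait)
n_selected = (model[-1].coef_ != 0).sum()
print(f"metabolites retained by the L1 penalty: {n_selected}")
```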
In near-surface geophysics, small portable loop-loop electromagnetic induction (EMI) sensors using harmonic sources with a constant and rather small frequency are increasingly used to investigate the electrical properties of the subsurface. For such sensors, the influence of electrical conductivity and magnetic permeability on the EMI response is well understood. Typically, data analysis focuses on reconstructing an electrical conductivity model by inverting the out-of-phase response. However, in a variety of near-surface applications, magnetic permeability (or susceptibility) models derived from the in-phase (IP) response may provide important additional information. With a view toward developing a fast 3D inversion procedure of the IP response for a dense grid of measurement points, we first analyze the 3D sensitivity functions associated with a homogeneous permeable half-space. Then, we compare synthetic data computed using a linear forward-modeling method based on these sensitivity functions with synthetic data computed using full nonlinear forward-modeling methods. The results indicate the correctness and applicability of our linear forward-modeling approach. Furthermore, we determine the advantages of converting IP data into apparent permeability, which, for example, allows us to extend the applicability of the linear forward-modeling method to high-magnetic environments. Finally, we compute synthetic data with the linear theory for a model consisting of a controlled magnetic target and compare the results with field data collected with a four-configuration loop-loop EMI sensor. With this field-scale experiment, we determine that our linear forward-modeling approach can reproduce measured data with sufficiently small error, and, thus, it represents the basis for developing efficient inversion approaches.
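After discretization, the linear forward-modeling idea reduces to a matrix-vector product between precomputed sensitivities and cell susceptibilities. The sketch below uses a placeholder exponential-decay kernel in place of the actual half-space sensitivity functions:

```python
import numpy as np

def linear_forward(sensitivity, susceptibility):
    """Predict in-phase EMI responses as a linear map of cell susceptibilities.

    sensitivity: (n_measurements, n_cells) matrix of precomputed sensitivities
    susceptibility: (n_cells,) magnetic susceptibility model
    """
    return sensitivity @ susceptibility

# Placeholder kernel: sensitivity decays exponentially with cell depth,
# with a station-dependent decay scale standing in for coil geometry.
depths = np.linspace(0.1, 2.0, 20)   # cell depths (m)
stations = np.arange(5)              # measurement positions
S = np.exp(-depths[None, :] / (0.4 + 0.1 * stations[:, None]))
chi = np.zeros_like(depths)
chi[8:12] = 0.01                     # a buried magnetic target
print(linear_forward(S, chi))        # predicted IP responses per station
```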
TURBO2 - a MATLAB simulation to study the effects of bioturbation on paleoceanographic time series
(2013)
Bioturbation (or benthic mixing) causes significant distortions in marine stable isotope signals and other palaeoceanographic records. Although the influence of bioturbation on these records is well known, it has rarely been dealt with systematically. The MATLAB program TURBO2 can be used to simulate the effect of bioturbation on individual sediment particles. It can therefore be used to model the distortion of all physical, chemical, and biological signals in deep-sea sediments, such as Mg/Ca ratios and UK37-based sea-surface temperature (SST) variations. In particular, it can be used to study distortions in paleoceanographic records that are based on individual sediment particles, such as SST records based on foraminifera assemblages. Furthermore, TURBO2 provides a tool to study the effect of benthic mixing on isotope signals such as 14C, δ18O, and δ13C, measured in a stratigraphic carrier such as foraminifera shells.
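The core idea of such a particle-based mixing simulation is easy to sketch: deposit particles carrying a signal one at a time, and shuffle the positions of all particles within a surface mixed layer at every deposition step. The mixed-layer thickness and step signal below are illustrative; the real TURBO2 additionally handles particle abundances and stratigraphic carriers.

```python
import random

def bioturbate(signal, mixed_layer=10):
    """Sketch of TURBO2-style benthic mixing of a particle-based signal.

    signal: list of per-particle values (e.g., delta 18O), oldest first,
    deposited one particle per step. Within the top `mixed_layer`
    particles, positions are randomly exchanged at every deposition
    step, smearing the recorded signal downcore.
    """
    column = []
    for value in signal:
        column.append(value)         # deposit the newest particle on top
        top = column[-mixed_layer:]  # the actively mixed surface layer
        random.shuffle(top)
        column[-mixed_layer:] = top
    return column

# A step change in the input signal (e.g., a glacial termination)...
pristine = [0.0] * 50 + [1.0] * 50
mixed = bioturbate(pristine, mixed_layer=15)
# ...appears smoothed over roughly the mixed-layer thickness.
print(mixed[40:60])
```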
The service-oriented architecture supports the dynamic assembly and runtime reconfiguration of complex open IT landscapes by means of runtime binding of service contracts, launching of new components, and termination of outdated ones. Furthermore, the evolution of these IT landscapes is not restricted to exchanging components for others using the same service contracts, as new service contracts can be added as well. However, current approaches for the modeling and verification of service-oriented architectures do not support these important capabilities to their full extent. In this report we present an extension of the current OMG proposal for service modeling with UML, SoaML, which overcomes these limitations. It permits modeling services and their service contracts at different levels of abstraction, provides a formal semantics for all modeling concepts, and enables verifying critical properties. Our compositional and incremental verification approach allows for complex properties including communication parameters and time, and covers, besides the dynamic binding of service contracts and the replacement of components, also the evolution of systems by means of new service contracts. The modeling and verification capabilities of the presented approach are demonstrated by means of a supply chain example, and the verification results of a first prototype are shown.