We used structural topic modeling to analyze over 800,000 German tweets about COVID-19 to answer the questions: What patterns emerge in tweets as a response to a health crisis? And how do the topics discussed change over time? The study leans on the goals associated with the health information seeking (GAINS) model, discerning whether a post aims at tackling and eliminating the problem (i.e., problem-focused) or at managing the emotions (i.e., emotion-focused), and whether it strives to maximize positive outcomes (promotion focus) or to minimize negative outcomes (prevention focus). The findings indicate four clusters salient in public reactions: 1) "Understanding" (problem-promotion); 2) "Action planning" (problem-prevention); 3) "Hope" (emotion-promotion); and 4) "Reassurance" (emotion-prevention). Public communication is volatile over time, and a shift from self-centered to community-centered topics is evidenced within 4.5 weeks. Our study illustrates the potential of social media text mining to quickly and efficiently extract public opinions and reactions. Monitoring fears and trending topics enables policymakers to respond rapidly to deviant behavior, like resistive attitudes toward containment measures or deteriorating physical health. Healthcare workers can use the insights to provide mental health services for battling anxiety or extensive loneliness from staying home.
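The study's core technique, topic modeling of short texts, can be sketched in a few lines. The authors used structural topic modeling (STM); the snippet below is a simplified, hypothetical stand-in that fits plain LDA with scikit-learn on invented mini-tweets.

```python
# Hypothetical illustration only: the study used structural topic modeling (STM);
# plain LDA serves here as a simplified stand-in, fitted on invented example texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "wash your hands and stay home",
    "new lockdown measures announced today",
    "hope we beat the virus together",
    "feeling lonely but staying hopeful",
]
X = CountVectorizer().fit_transform(tweets)  # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)  # per-tweet topic proportions, rows sum to 1
```

Unlike plain LDA, STM additionally regresses topic prevalence on document covariates (e.g., tweet date), which is what allows tracking topic shifts over time.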
A suitable vehicle for the integration of bioactive plant constituents is proposed: modifying proteins with phenolics and applying these modified proteins to protect labile constituents. The study dissects the noncovalent and covalent interactions of beta-lactoglobulin with coffee-specific phenolics, comparing covalent reactions modulated under alkaline conditions with those modulated by polyphenol oxidase. Tryptic digestion combined with MALDI-TOF-MS provided a tentative allocation of the modification type and site in the protein, and an in silico model of the modified beta-lactoglobulin is proposed. The modification delivers proteins with enhanced antioxidative properties; changed structural properties and differences in solubility, surface hydrophobicity, and emulsification were observed. The polyphenol oxidase modulated reaction provides a modified beta-lactoglobulin that has high antioxidative power, is thermally more stable, requires less energy to unfold, and, when used to emulsify lutein esters, confers on them a higher stability against UV light. Thus, adaptation of this modification provides an innovative approach for functionalizing proteins and their use in the food industry.
This study addresses the interactions of coffee storage proteins with coffee-specific phenolic compounds. The protein profiles of Coffea arabica and Coffea canephora (var. robusta) were compared. Major phenolic compounds were extracted and analyzed with appropriate methods. The polyphenol-protein interactions during protein extraction were addressed by different analytical setups [reversed-phase high-performance liquid chromatography (RP-HPLC), sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), matrix-assisted laser desorption ionization-time of flight-mass spectrometry (MALDI-TOF-MS), and Trolox equivalent antioxidant capacity (TEAC) assays], with the focus directed toward the identification of covalent adduct formation. The results indicate that C. arabica proteins are more susceptible to these interactions and that polyphenol oxidase activity seems to be a crucial factor for the formation of these addition products. A tentative allocation of the modification type and site in the protein has been attempted. Thus, the first available in silico model of modified coffee proteins is reported. The extent of these modifications may contribute to the structure and function of "coffee melanoidins" and is discussed in the context of coffee flavor formation.
To gain a deeper understanding of the mechanisms behind biomass accumulation, it is important to study plant growth behavior. Manually phenotyping large sets of plants requires substantial human resources and expertise and is typically not feasible for the detection of weak growth phenotypes. Here, we established an automated growth phenotyping pipeline for Arabidopsis thaliana to aid researchers in comparing the growth behaviors of different genotypes.
The analysis pipeline includes automated image analysis of two-dimensional digital plant images and evaluation of manually annotated information of growth stages. It employs linear mixed-effects models to quantify genotype effects on total rosette area and relative leaf growth rate (RLGR) and ANOVAs to quantify effects on developmental times.
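The mixed-model step described above can be sketched as follows; this is a minimal illustration with statsmodels on invented data (all column names, effect sizes, and noise levels are assumptions, not the study's).

```python
# Minimal sketch, assuming invented data: a linear mixed-effects model quantifying
# a genotype effect on rosette area, with plant ID as the random-effect grouping.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_plants, n_days = 20, 5
df = pd.DataFrame({
    "plant": np.repeat(np.arange(n_plants), n_days),
    "day": np.tile(np.arange(n_days), n_plants),
    "genotype": np.repeat(rng.integers(0, 2, n_plants), n_days),
})
plant_offset = rng.normal(0, 0.5, n_plants)  # per-plant random intercept
df["area"] = (2 + 0.8 * df["day"] + 1.0 * df["genotype"]
              + plant_offset[df["plant"]] + rng.normal(0, 0.2, len(df)))

model = smf.mixedlm("area ~ day + genotype", df, groups=df["plant"]).fit()
genotype_effect = model.params["genotype"]  # estimate of the fixed genotype effect
```

The random intercept per plant absorbs plant-to-plant variability, so the genotype effect is estimated against between-plant rather than residual noise.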
Using the system, a single researcher can phenotype up to 7000 plants per day. Technical variance is very low (typically < 2%). We show quantitative results for the growth-impaired starch-excess mutant sex4-3 and the growth-enhanced mutant grf9.
We show that recording environmental and developmental variables significantly reduces noise levels in the phenotyping datasets and that careful examination of predictor variables (such as days after sowing or germination) is crucial to avoid exaggeration of recorded phenotypes and thus biased conclusions.
For around a decade, deep learning, the sub-field of machine learning concerned with artificial neural networks composed of many computational layers, has been reshaping statistical model development in many research areas, such as image classification, machine translation, and speech recognition. The geoscientific disciplines in general, and hydrology in particular, have not stood aside from this movement. Modern deep learning-based techniques have recently been gaining popularity for solving a wide range of hydrological problems: modeling and forecasting of river runoff, regionalization of hydrological model parameters, assessment of available water resources, and identification of the main drivers of recent changes in water balance components. This growing popularity of deep neural networks is primarily due to their universality and efficiency. These qualities, together with the rapidly growing amount of accumulated environmental information and the increasing availability of computing facilities and resources, allow us to speak of deep neural networks as a new generation of mathematical models that may not replace existing solutions but will significantly enrich the field of geophysical process modeling. This paper provides a brief overview of the current state of the development and application of deep neural networks in hydrology. We also provide a qualitative long-term forecast of the development of deep learning technology for hydrological modeling, based on the "Gartner Hype Curve", which describes the life cycle of modern technologies in general terms.
During the last few decades, the rapid separation of the Small Aral Sea from the isolated basin has changed its hydrological and ecological conditions tremendously. In the present study, we developed and validated a hybrid model for the Syr Darya River basin based on a combination of state-of-the-art hydrological and machine learning models. The climate change impact on freshwater inflow into the Small Aral Sea for the projection period 2007–2099 was quantified based on the developed hybrid model and on bias-corrected and downscaled meteorological projections simulated by four General Circulation Models (GCMs) for each of three Representative Concentration Pathway (RCP) scenarios. The developed hybrid model reliably simulates freshwater inflow for the historical period, with a Nash–Sutcliffe efficiency of 0.72 and a Kling–Gupta efficiency of 0.77. The results of the climate change impact assessment showed that the freshwater inflow projections produced by different GCMs are contradictory for the projection period. However, we identified that the relative runoff changes are expected to be more pronounced under more aggressive RCP scenarios. The simulated projections of freshwater inflow provide a basis for further assessment of climate change impacts on the hydrological and ecological conditions of the Small Aral Sea in the 21st century.
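The two goodness-of-fit scores quoted in the abstract follow standard definitions (Nash & Sutcliffe 1970; Gupta et al. 2009); a minimal sketch with invented discharge values:

```python
# Standard formulas for the two efficiency scores named in the abstract;
# the observed/simulated series below are invented for illustration.
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus error variance over observed variance."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency: combines correlation, variability, and bias ratios."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    r = np.corrcoef(obs, sim)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
```

Both scores equal 1 for a perfect simulation; values well below 1 indicate error, variability mismatch, or bias.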
Whenever eye movements are measured, a central part of the analysis concerns where subjects fixate and why they fixated where they did. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can beneficially be thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular, we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
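The central idea can be sketched directly: fixation locations modeled as an inhomogeneous spatial Poisson process, simulated here by thinning (Lewis-Shedler). The intensity function below is a made-up "saliency map" for illustration, not taken from the tutorial.

```python
# Sketch: simulate an inhomogeneous spatial Poisson process on the unit square.
# The intensity function is an invented, center-weighted "saliency" surface.
import numpy as np

rng = np.random.default_rng(0)

def intensity(x, y):
    # Hypothetical saliency: fixation intensity peaks at the image center.
    return 200.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)

lam_max = 200.0  # upper bound on the intensity over the unit square

# Thinning: draw a homogeneous Poisson process at rate lam_max, then keep
# each candidate point with probability intensity / lam_max.
n = rng.poisson(lam_max)
xy = rng.random((n, 2))
keep = rng.random(n) < intensity(xy[:, 0], xy[:, 1]) / lam_max
fixations = xy[keep]  # simulated fixation locations
```

The retained points cluster where the intensity is high, which is exactly the sense in which an intensity function links image properties to fixation locations.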
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems with information-technology-based coupling. A particularly challenging case are systems formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism for modeling this class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach, while differential equations define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems, in which the graph transformation rules define the discrete steps where the topology or modes may change, while the differential equations capture the continuous behavior between such discrete changes. In addition, we demonstrate that automated analysis techniques for inductive invariants, known from timed graph transformation systems, can be extended to cover the hybrid case for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
In this work, an approach to paleoclimate reconstruction for tropical East Africa is presented. After a short summary of modern climate conditions in the tropics and the peculiarities of the East African climate, the potential of reconstructing climate from paleolake sediments is discussed. As demonstrated, the hydrologic sensitivity of high-elevation closed-basin lakes in the Central Kenya Rift offers valuable guarantees for the establishment of long-term climate records. Temporal fluctuations of the limnological characteristics preserved in the lake sediments are used to identify variations in the Quaternary climate history. Based on diatom analyses in radiocarbon- and 40Ar/39Ar-dated sediments, a chronology of paleoecologic fluctuations is developed for the Central Kenya Rift lakes Nakuru, Elmenteita, and Naivasha. At least during the penultimate interglacial (around 140 to 60 kyr BP) and during the last interglacial (around 12 to 4 kyr BP), these lakes experienced several transgression-regression cycles at time intervals of about 11,000 years. Additionally, a long-term trend of lake evolution is found, suggesting a general succession from deep freshwater lakes towards more saline waters during the last million years. Using ecologic transfer functions and a simple lake-balance model, the observed paleohydrologic fluctuations are linked to potential precipitation-evaporation changes in the lake basins. Although tectonic influences on the drainage pattern and the effect of varied seepage are also investigated, it can be shown that even a small increase in precipitation of about 30±10% may have affected the hydrologic budget of the intra-rift lakes within the reconstructed range. The findings of this study help to assess the natural climate variability of East Africa. They furthermore reflect the sensitivity of the Central Kenya Rift lakes to fluctuations of large-scale climate parameters, such as solar radiation and sea-surface temperatures of the Indian Ocean.
One of the most striking features of ecological systems is their ability to undergo sudden outbreaks in the population numbers of one or a small number of species. The similarity of outbreak characteristics exhibited in totally different and unrelated ecological systems naturally leads to the question of whether there are universal mechanisms underlying outbreak dynamics in ecology. It is shown in two case studies (the dynamics of phytoplankton blooms under variable nutrient supply, and the spread of epidemics in networks of cities) that one explanation for the regular recurrence of outbreaks stems from the interaction of the natural systems with periodic variations of their environment. Natural aquatic systems like lakes offer very good examples of the annual recurrence of outbreaks in ecology. The question of whether chaos is responsible for the irregular heights of outbreaks is central in the domain of ecological modeling; it is investigated here in the context of phytoplankton blooms. The dynamics of epidemics in networks of cities is a problem which offers many ecological and theoretical aspects. The coupling between the cities is introduced through their sizes and gives rise to a weighted network whose topology is generated from the distribution of the city sizes. We examine the dynamics in this network and classify the different possible regimes. It can be shown that a single epidemiological model can be reduced to a one-dimensional map, and in this context we analyze the dynamics in networks of weighted maps. The coupling is a saturation function with a parameter that can be interpreted as an effective temperature of the network. This parameter allows the network topology to be varied continuously from global coupling to a hierarchical network. We perform a bifurcation analysis of the global dynamics and succeed in constructing an effective theory that explains the behavior of the system very well.
In humans and in foveated animals, visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths nevertheless contain important statistical structure, such as spatial clustering, on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies, first, on activation dynamics via spatially limited (foveated) access to saliency information and, second, on a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data.
Modelers can improve a model by addressing the causes of model errors (data errors and structural errors). This leads to implementing model enhancements (MEs), for example, meteorological data based on more monitoring stations, improved calibration data, and/or modifications in process formulations. However, deciding on which MEs to implement remains a matter of expert knowledge. After implementing multiple MEs, any improvement in model performance is not easily attributed, especially when considering different objectives or aspects of this improvement (e.g., better dynamics vs. reduced bias). We present an approach for comparing the effect of multiple MEs based on real observations and considering multiple objectives (MMEMO). A stepwise selection approach and structured plots help to address the multidimensionality of the problem. Tailored analyses allow a differentiated view of the effect of MEs and their interactions. MMEMO is applied to a case study employing the mesoscale hydro-sedimentological model WASA-SED for the Mediterranean-mountainous Isabena catchment, northeast Spain. The seven investigated MEs show diverse effects: some MEs (e.g., rainfall data) cause improvements for most objectives, while other MEs (e.g., land use data) only affect a few objectives or even decrease model performance. Interactions of MEs were observed for roughly half of the MEs, confirming the need to address them in the analysis. Calibration and increasing the temporal resolution showed a far stronger impact than any of the other MEs. The proposed framework can be adopted in other studies to analyze the effect of MEs and, thus, facilitate the identification and implementation of the most promising MEs for comparable cases.
Expanding modeling notations
(2022)
Creativity is a common aspect of business processes and thus needs a proper representation through process modeling notations. However, creative processes constitute highly flexible process elements, as new and unforeseeable outcomes are developed. This presents a challenge for modeling languages. Current methods for representing creative-intensive work are largely unable to capture the creative specifics that are relevant for successfully running and managing these processes. We outline the concept of creative-intensive processes and present an example from a game design process in order to derive critical process aspects relevant for its modeling. Six aspects are identified, first and foremost process flexibility, as well as temporal uncertainty, experience, types of creative problems, phases of the creative process, and individual criteria. By first analyzing which aspects of creative work existing modeling notations already cover, we then discuss which modeling extensions need to be developed to better represent creativity within business processes. We argue that a proper representation of creative work would not just improve the management of those processes, but could further enable process actors to run these creative processes more efficiently and adjust them to better fit the creative needs.
The lithosphere is often assumed to reside in a thermal steady state when quantitatively describing the temperature distribution in continental interiors and sedimentary basins, but also at active plate boundaries. Here, we investigate the applicability limit of this assumption at slowly deforming continental rifts. To this aim, we assess the tectonic thermal imprint in numerical experiments that cover a range of realistic rift configurations. For each model scenario, the deviation from thermal equilibrium is evaluated by comparing the transient temperature field of every model to a corresponding steady-state model with an identical structural configuration. We find that the validity of the thermal steady-state assumption strongly depends on rift type, divergence velocity, sampling location, and depth within the rift. Maximum differences between transient and steady-state models occur in narrow rifts, at the rift sides, and where the extension rate exceeds 0.5-2 mm/a. Wide rifts, however, reside close to thermal steady state even at high extension velocities. The transient imprint of rifting appears to be negligible overall for shallow isotherms with temperatures below 100 degrees C. In contrast, a steady-state treatment of deep crustal isotherms leads to an underestimation of crustal temperatures, especially in narrow rift settings. Thus, not only relatively fast rifts like the Gulf of Corinth, Red Sea, and Main Ethiopian Rift, but even slow rifts like the Kenya Rift, Rhine Graben, and Rio Grande Rift must be expected to feature a pronounced transient component in the temperature field and therefore to violate the thermal steady-state assumption for deeper crustal isotherms.
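The transient-versus-steady-state comparison can be illustrated with a toy one-dimensional heat-conduction model (all parameter values below are illustrative assumptions, not those of the study's rift models):

```python
# Toy 1-D heat conduction through a crustal column, explicit finite differences.
# Diffusivity, boundary temperatures, perturbation, and duration are invented.
import numpy as np

kappa = 1e-6            # thermal diffusivity, m^2/s
L, nz = 100e3, 101      # column depth (m) and grid points
dz = L / (nz - 1)
T_top, T_bot = 0.0, 1300.0

# With fixed boundary temperatures, the conductive steady state is linear.
z = np.linspace(0, L, nz)
T_steady = T_top + (T_bot - T_top) * z / L

# Transient run: start from a perturbed (cooler) interior and diffuse for 10 Myr.
T = T_steady.copy()
T[1:-1] *= 0.8                       # imposed thermal perturbation
dt = 0.4 * dz * dz / kappa           # stable explicit time step
seconds_per_year = 3.15e7
steps = int(10e6 * seconds_per_year / dt)
for _ in range(steps):
    T[1:-1] += kappa * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

max_deviation = np.abs(T - T_steady).max()  # remaining transient imprint, K
```

Because the diffusive equilibration time L²/κ is of order 100 Myr, a sizeable deviation from steady state survives after 10 Myr, which is the qualitative point of the abstract.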
Forage availability has been suggested as one driver of the observed decline in honey bees. However, little is known about the effects of its spatiotemporal variation on colony success. We present a modeling framework for assessing honey bee colony viability in cropping systems. Based on two real farmland structures, we developed a landscape generator to design cropping systems varying in crop species identity, diversity, and relative abundance. The landscape scenarios generated were evaluated using the existing honey bee colony model BEEHAVE, which links foraging to in-hive dynamics. We thereby explored how different cropping systems determine spatiotemporal forage availability and, in turn, honey bee colony viability (e.g., time to extinction, TTE) and resilience (indicated by, e.g., brood mortality). To assess overall colony viability, we developed the metrics P_H and P_P, which quantify how much of the nectar and pollen provided by a cropping system per year is converted into a colony's adult worker population. Both crop species identity and diversity determined the temporal continuity in nectar and pollen supply and thus colony viability. Overall farmland structure and relative crop abundance were less important, but details mattered. For monocultures and for four-crop systems composed of cereals, oilseed rape, maize, and sunflower, P_H and P_P were below the viability threshold. Such cropping systems showed frequent, badly timed, and prolonged forage gaps, leading to detrimental cascading effects on life stages and in-hive work force, which critically reduced colony resilience. Four-crop systems composed of rye-grass-dandelion pasture, trefoil-grass pasture, sunflower, and phacelia ensured continuous nectar and pollen supply, resulting in TTE > 5 yr and P_H (269.5 kg) and P_P (108 kg) being above viability thresholds for 5 yr.
Overall, trefoil-grass pasture, oilseed rape, buckwheat, and phacelia improved the temporal continuity in forage supply and the colony's viability. Our results are hypothetical, as they are obtained from simplified landscape settings, but they nevertheless match empirical observations, in particular the viability threshold. Our framework can be used to assess the effects of cropping systems on honey bee viability and to develop land-use strategies that help maintain pollination services by avoiding prolonged and badly timed forage gaps.
Sustainable land use in mountain regions under global change: synthesis across scales and disciplines
(2013)
Mountain regions provide essential ecosystem goods and services (EGS) for both mountain dwellers and people living outside these areas. Global change endangers the capacity of mountain ecosystems to provide key services. The Mountland project focused on three case study regions in the Swiss Alps and aimed to propose land-use practices and alternative policy solutions to ensure the provision of key EGS under climate and land-use changes. We summarized and synthesized the results of the project and provide insights into the ecological, socioeconomic, and political processes relevant for analyzing global change impacts on a European mountain region. In Mountland, an integrative approach was applied, combining methods from economics and the political and natural sciences to analyze ecosystem functioning from a holistic human-environment system perspective. In general, surveys, experiments, and model results revealed that climate and socioeconomic changes are likely to increase the vulnerability of the EGS analyzed. We regard the following key characteristics of coupled human-environment systems as central to our case study areas in mountain regions: thresholds, heterogeneity, trade-offs, and feedback. Our results suggest that the institutional framework should be strengthened in a way that better addresses these characteristics, allowing for (1) more integrative approaches, (2) a more network-oriented management and steering of political processes that integrate local stakeholders, and (3) enhanced capacity building to decrease the identified vulnerability as central elements in the policy process. Further, to maintain and support the future provision of EGS in mountain regions, policy making should also focus on project-oriented, cross-sectoral policies and spatial planning as a coordination instrument for land use in general.
The planning and implementation, as well as the static and dynamic analysis, of business processes in the field of administration and government at the municipal, state, and federal levels by means of information and communication technology have long occupied politicians and information technology strategists as well as the public.
The resulting term e-government has subsequently been examined from a wide variety of technical, political, and semantic perspectives.
The present thesis concentrates on two main topics:
• The first topic concerns the design of a hierarchical architecture model, for which seven hierarchical layers can be identified. These appear necessary, but also sufficient, to describe the general case.
The background for this is many years of process and administrative experience as head of the IT department of the city administration of Landshut, an independent city with around 69,000 inhabitants northeast of Munich. It is representative of many administrative procedures in the Federal Republic of Germany and yet, as an object of analysis, remains manageable in its overall complexity and number of processes.
From the analysis of all core procedures, static and dynamic structures can thus be extracted and modeled abstractly.
The focus lies on representing the existing service workflows in a municipality. The transformation of service requests within a hierarchical system, the representation of the control and operational states in all layers, and the strategy for error detection and recovery create a transparent basis for comprehensive restructuring and optimization.
FMC-eCS was used for the modeling, a methodology for modeling discrete-state systems under consideration of possible inconsistencies, developed in the communication systems group at the Hasso-Plattner-Institut für Softwaresystemtechnik GmbH (HPI) (supervisor: Prof. Dr.-Ing. Werner Zorn [ZW07a, ZW07b]).
• The second topic is devoted to the quantitative modeling and optimization of e-government service systems, carried out using the example of the citizens' office of the city of Landshut over the period from 2008 to 2015. This is based on continuous collection of operational data with elaborate preprocessing to extract mathematically describable probability distributions.
The duty roster developed from this was verified in permanent real-world operation with respect to the achievable optimizations.
[ZW07a] Zorn, Werner: "FMC-QE, A New Approach in Quantitative Modeling", presentation at MSV'07, The 2007 International Conference on Modeling, Simulation and Visualization Methods, WorldComp2007, Las Vegas, June 28, 2007.
[ZW07b] Zorn, Werner: "FMC-QE, A New Approach in Quantitative Modeling", publication, Hasso-Plattner-Institut für Softwaresystemtechnik at the University of Potsdam, June 28, 2007.
In the course of the Covid-19 pandemic, two figures have been discussed daily: the most recently reported number of newly infected people and the so-called reproduction rate. The latter reflects how many further people an individual infected with the coronavirus infects on average. There are many ways to estimate this value; the Robert Koch Institute, for example, always reports two R values in its daily situation report: a 4-day R value and a less volatile 7-day R value. This thesis presents a further way of modeling some aspects of the pandemic and estimating the reproduction rate.
The first half of the thesis presents the mathematical foundations required for the modeling, assuming that the reader already has a basic understanding of stochastic processes. The fundamentals section introduces branching processes with several examples and presents the results from this field that are important for this thesis. We first treat simple branching processes and then extend them to branching processes with several types. To simplify the notation, we restrict ourselves to two types; the principle, however, can be extended to an arbitrary number of types.
Above all, the importance of the parameter λ is to be emphasized. This value can be interpreted as the average number of offspring of an individual and determines the dynamics of the process over a longer period of time. Applied to the pandemic, the parameter λ plays the same role as the reproduction rate R.
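The role of λ can be made concrete with a minimal single-type Galton-Watson simulation (illustrative only; the thesis's model is a two-type process):

```python
# Illustrative sketch, not the Yanev et al. model: a single-type Galton-Watson
# branching process with Poisson(lam) offspring. lam > 1 lets the population
# grow on average, while lam < 1 drives it toward extinction.
import numpy as np

def simulate(lam, generations=20, z0=50, seed=0):
    rng = np.random.default_rng(seed)
    z = z0
    sizes = [z]
    for _ in range(generations):
        # Each of the z individuals draws a Poisson(lam) number of offspring.
        z = int(rng.poisson(lam, size=z).sum()) if z > 0 else 0
        sizes.append(z)
    return sizes

subcritical = simulate(lam=0.7)    # expected to die out
supercritical = simulate(lam=1.3)  # expected to grow like lam**n
```

The same threshold behavior at λ = 1 is what makes the reproduction rate R the decisive quantity in epidemic monitoring.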
In the second half of this thesis we present an application of the theory of multitype branching processes. In their publication "Branching stochastic processes as models of Covid-19 epidemic development", Professor Yanev and his co-workers model the spread of the coronavirus by a branching process with two types. We discuss this model and derive estimators from it, with the aim of determining the reproduction rate. In addition, we analyze possibilities for estimating the number of unreported cases. Finally, we apply the estimators to the figures for Germany and evaluate the results.