Proteins are chain molecules built from amino acids. The precise sequence of the 20 different types of amino acids in a protein chain determines the structure into which the protein folds, and the three-dimensional structure in turn specifies the biological function of the protein. The reliable folding of proteins is a prerequisite for their robust function. Misfolding can lead to protein aggregates that cause severe diseases, such as Alzheimer's, Parkinson's, or variant Creutzfeldt-Jakob disease. Small single-domain proteins often fold without experimentally detectable metastable intermediate states. The folding dynamics of these proteins is thought to be governed by a single transition-state barrier between the unfolded and the folded state. The transition state is highly unstable and cannot be observed directly. However, mutations in which a single amino acid of the protein is substituted by another can provide indirect access. The mutations slightly change the transition-state barrier and, thus, the folding and unfolding times of the protein. The central question is how to reconstruct the transition state from the observed changes in folding times. In this habilitation thesis, a novel method to extract structural information on transition states from mutational data is presented. The method is based on (i) the cooperativity of structural elements such as alpha-helices and beta-hairpins, and (ii) the splitting of mutation-induced free-energy changes into components for these elements. By fitting a few parameters, the method reveals the degree of structure formation of alpha-helices and beta-hairpins in the transition state. In addition, it is shown in this thesis that the folding routes of small single-domain proteins are dominated by loop-closure dependencies between the structural elements.
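The split-and-fit idea can be illustrated with a toy calculation. The sketch below is illustrative only: the matrix of per-element free-energy changes, the number of elements, and the noise-free data are all assumptions, not results from the thesis. Mutation-induced changes of the folding barrier are written as a linear combination of per-element contributions, and the structure-formation degrees are recovered by least squares.

```python
import numpy as np

# Hypothetical illustration: each mutation m perturbs one or more
# structural elements (helices, hairpins).  The observed barrier change
# is modelled as ddG_obs[m] = sum_e A[m, e] * chi[e], where A[m, e] is
# the mutation-induced free-energy change assigned to element e and
# chi[e] in [0, 1] is the degree of structure formation of that element
# in the transition state (the fitted parameter).

# toy data: 6 mutations, 2 structural elements (one helix, one hairpin)
A = np.array([
    [1.8, 0.0],   # mutation in the helix only
    [2.5, 0.0],
    [0.0, 1.2],   # mutation in the hairpin only
    [0.0, 2.0],
    [1.0, 0.9],   # mutation affecting both elements
    [0.6, 1.5],
])
chi_true = np.array([0.8, 0.3])   # "true" structure-formation degrees
ddG_obs = A @ chi_true            # idealised, noise-free observations

# least-squares fit of the two parameters from the six observations
chi_fit, *_ = np.linalg.lstsq(A, ddG_obs, rcond=None)
print(chi_fit)   # recovers [0.8, 0.3] for noise-free data
```

With noisy experimental data the same fit would return estimates with uncertainties; the point is only that few parameters suffice once the free-energy changes are assigned to cooperative elements.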
Quantitative thermodynamic and geochemical modeling is today applied in a variety of geological settings, from the petrogenesis of igneous rocks to the oceanic realm. Thermodynamic calculations are used, for example, to gain better insight into lithosphere dynamics, to constrain melting processes in the crust and mantle, and to study fluid-rock interaction. The development of thermodynamic databases and of computer programs to calculate equilibrium phase diagrams has greatly advanced our ability to model geodynamic processes from subduction to orogenesis. A well-known problem, however, is that despite its broad application, the use and interpretation of thermodynamic models applied to natural rocks is far from straightforward. For example, chemical disequilibrium and/or unknown rock properties, such as fluid activities, complicate the application of equilibrium thermodynamics.
One major aspect of the publications presented in this Habilitationsschrift is a set of new approaches to unravel the dynamic and chemical histories of rocks, including applications to chemically open-system behaviour. This approach is especially important for rocks affected by element fractionation due to fractional crystallisation and fluid loss during dehydration reactions. Furthermore, chemically open-system behaviour also has to be considered when studying fluid-rock interaction processes and when extracting information from compositionally zoned metamorphic minerals. In this Habilitationsschrift, several publications are presented in which I incorporate such open-system behaviour into forward models by incrementing the calculations and considering the changing reacting rock composition during metamorphism. I apply thermodynamic forward modelling that incorporates the effects of element fractionation in a variety of geodynamic and geochemical applications in order to better understand lithosphere dynamics and mass transfer in solid rocks.
In three of the presented publications I combine thermodynamic forward models with trace element calculations in order to broaden the application of geochemical numerical forward modeling. In these publications, a combination of thermodynamic and trace element forward modeling is used to study and quantify processes in metamorphic petrology at spatial scales from µm to km. In the thermodynamic forward models I use Gibbs energy minimization to quantify mineralogical changes along a reaction path of a chemically open fluid/rock system. These results are combined with mass-balanced trace element calculations to determine the trace element distribution between rock and melt/fluid during the metamorphic evolution. In this way, the effects of mineral reactions, fluid-rock interaction, and element transport in metamorphic rocks on the trace element and isotopic compositions of minerals, rocks, and percolating fluids or melts can be predicted.
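The mass-balance step can be sketched as a small equilibrium-partitioning calculation. The phase names, modal abundances, and partition coefficients below are invented for illustration only; the actual models couple such a step to Gibbs energy minimization along a reaction path.

```python
import numpy as np

# Hypothetical sketch: given the mineral modes (weight fractions)
# predicted by a phase-equilibrium model and mineral/reference-medium
# partition coefficients Kd, distribute a bulk trace-element budget
# among the stable phases so that mass is conserved:
#   c_phase[i] = Kd[i] * c_ref   with   sum_i modes[i] * c_phase[i] = c_bulk

phases = ["garnet", "omphacite", "phengite", "fluid"]
modes  = np.array([0.20, 0.45, 0.30, 0.05])   # weight fractions, sum to 1
kd     = np.array([20.0, 1.5, 0.1, 0.02])     # e.g. a heavy rare earth element
c_bulk = 2.0                                  # ppm in the bulk rock

c_ref = c_bulk / np.dot(modes, kd)   # concentration in the reference medium
c_phase = kd * c_ref                 # equilibrium concentration in each phase

for name, c in zip(phases, c_phase):
    print(f"{name:10s} {c:8.3f} ppm")
```

Repeating this calculation incrementally, while removing fractionating phases (e.g. garnet cores) or escaping fluid from the reactive bulk composition, is the open-system element of the approach.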
One of the included publications shows that trace element growth zonation in metamorphic garnet porphyroblasts can be used to obtain crucial information about the reaction path of the investigated sample. In order to interpret the major and trace element distributions and zoning patterns in terms of the reaction history of the samples, we combined thermodynamic forward models with mass-balance rare earth element calculations. Such combined thermodynamic and mass-balance calculations of the rare earth element distribution among the modelled stable phases yielded characteristic zonation patterns in garnet that closely resemble those in the natural samples. In that paper we show that garnet growth and trace element incorporation occurred in near thermodynamic equilibrium with the matrix phases during subduction, and that the rare earth element patterns in garnet exhibit distinct enrichment zones that fingerprint the minerals involved in the garnet-forming reactions.
In two of the presented publications I illustrate the capacities of combined thermodynamic-geochemical modeling with examples relevant to mass transfer in subduction zones. The first example focuses on fluid-rock interaction in and around a blueschist-facies shear zone in felsic gneisses, where fluid-induced mineral reactions and their effects on boron (B) concentrations and isotopic compositions in white mica are modeled. In the second example, fluid release from a subducted slab and the associated transport of B, as well as variations in B concentrations and isotopic compositions in the liberated fluids and residual rocks, are modeled. I show that, combined with experimental data on elemental partitioning and isotopic fractionation, thermodynamic forward modeling offers enormous potential that is far from exhausted.
In the publications presented in this Habilitationsschrift I compare the modeled results to geochemical data from natural minerals and rocks and demonstrate that the combination of thermodynamic and geochemical models enables the quantification of metamorphic processes and insights into element cycling that were previously unattainable.
Thus, the contributions to the scientific community presented in this Habilitationsschrift concern the fields of petrology, geochemistry, and geochronology, but also ore geology, all of which use thermodynamic and geochemical models to solve various problems related to geo-materials.
Classical physics and chemistry distinguish three types of bonding: the covalent bond, the ionic bond, and the metallic bond. Molecules, on the other hand, are held together by weak intermolecular interactions; despite the weakness of these forces, they are less well understood, but no less important. In forward-looking fields such as nanotechnology, supramolecular chemistry, and biochemistry they are of fundamental importance.
To describe, predict, and understand weak intermolecular interactions, they must first be captured theoretically. This involves various quantum-chemical methods, which in this work are presented, compared, further developed, and finally applied to exemplary problems in chemistry. Building on a hierarchy of methods of different accuracy, these methods are employed, elaborated, and combined towards these goals.
What is calculated is the electronic structure, i.e. the distribution and energy of the electrons, which essentially hold the atoms together. Since the inaccuracies in the description of the electronic structure depend on the methods used, these effects can be examined in detail, described, and used as the basis for further development, and the methods can subsequently be tested on various model systems. The speed of the calculations on modern computers is an essential component to take into account, since in general the accuracy increases exponentially with the computing time, which inevitably pushes against the limits of what is feasible.
The most accurate of the methods used is based on coupled-cluster theory, which allows very good predictions. With it, so-called spectroscopic accuracy is achieved, with deviations of only a few wavenumbers, as comparisons with experimental data show. One way to approximate such high-accuracy methods is based on density functional theory: here the "Boese-Martin for Kinetics" (BMK) functional was developed, whose functional form recurs in many density functionals published after 2010.
With the help of the more accurate methods, semi-empirical force fields for describing intermolecular interactions can finally be parametrized for individual systems; these require far less computing time than the methods based on the accurate calculation of the electronic structure of molecules.
For larger systems, different methods can also be combined. In this context, embedding schemes were refined and proposed together with new methodological approaches. They use both symmetry-adapted perturbation theory and the quantum-chemical embedding of fragments into larger systems treated quantum-chemically.
The development of new methods derives its value essentially from its application:
In this work, hydrogen bonds were initially the main focus. They are among the stronger intermolecular interactions and remain a challenge. Van der Waals interactions, by contrast, are relatively easy to describe with force fields. For this reason, many of the methods in use today perform comparatively poorly for systems dominated by hydrogen bonds.
This is followed by an investigation of molecular aggregates, addressing the effects of intermolecular interactions on the vibrational frequencies of molecules. Here we also go beyond the so-called rigid-rotor/harmonic-oscillator approximation.
A far-reaching application concerns adsorbates, in this case molecules on ionic or metallic surfaces. They can be treated with methods similar to those used for intermolecular interactions, and can be described very accurately with special embedding schemes. The results of these theoretical calculations stimulated a re-evaluation of the previously known experimental results.
Molecular crystals are an extremely important field of research. They are held together by weak interactions ranging from van der Waals forces to hydrogen bonds. Here, too, newly developed methods were employed, which represent an interesting alternative that is at least as accurate as the currently established methods.
The methods developed, as well as their applications, are therefore extremely diverse. The electronic-structure calculations treated here range from the so-called post-Hartree-Fock methods, through the use of density functional theory, to semi-empirical force fields and combinations thereof. The applications range from single molecules in the gas phase, through adsorption on surfaces, to molecular solids.
Understanding the formation of stars in galaxies is central to much of modern astrophysics. For several decades it has been thought that the star formation process is primarily controlled by the interplay between gravity and magnetostatic support, modulated by neutral-ion drift. Recently, however, both observational and numerical work has begun to suggest that supersonic interstellar turbulence rather than magnetic fields controls star formation. This review begins with a historical overview of the successes and problems of both the classical dynamical theory of star formation, and the standard theory of magnetostatic support from both observational and theoretical perspectives. We then present the outline of a new paradigm of star formation based on the interplay between supersonic turbulence and self-gravity. Supersonic turbulence can provide support against gravitational collapse on global scales, while at the same time it produces localized density enhancements that allow for collapse on small scales. The efficiency and timescale of stellar birth in Galactic gas clouds strongly depend on the properties of the interstellar turbulent velocity field, with slow, inefficient, isolated star formation being a hallmark of turbulent support, and fast, efficient, clustered star formation occurring in its absence. After discussing in detail various theoretical aspects of supersonic turbulence in compressible self-gravitating gaseous media relevant for star forming interstellar clouds, we explore the consequences of the new theory for both local star formation and galactic scale star formation. The theory predicts that individual star-forming cores are likely not quasi-static objects, but dynamically evolving. Accretion onto these objects will vary with time and depend on the properties of the surrounding turbulent flow. This has important consequences for the resulting stellar mass function. 
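The notion of turbulent support against gravitational collapse can be made concrete with a standard back-of-the-envelope estimate: folding the turbulent velocity dispersion into an effective sound speed inflates the Jeans mass. This is a textbook simplification for illustration, not the full theory outlined in the review, and the numbers are merely typical molecular-cloud values.

```python
import math

G = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
m_H = 1.67e-24   # hydrogen mass, g
k_B = 1.38e-16   # Boltzmann constant, erg/K
mu = 2.33        # mean molecular weight of molecular gas
M_SUN = 1.989e33 # solar mass, g

def jeans_mass(T, n, sigma_turb=0.0):
    """Jeans mass (solar masses) of gas at temperature T (K) and number
    density n (cm^-3).  Turbulence enters via an effective sound speed
    c_eff^2 = c_s^2 + sigma_turb^2 / 3 -- a common simplified way to
    include turbulent 'pressure' support on global scales."""
    rho = mu * m_H * n
    c_s2 = k_B * T / (mu * m_H)
    c_eff2 = c_s2 + sigma_turb ** 2 / 3.0
    m_j = (5.0 * c_eff2 / G) ** 1.5 * (3.0 / (4.0 * math.pi * rho)) ** 0.5
    return m_j / M_SUN

# thermal support only: a cold dense core (10 K, n = 1e4 cm^-3)
print(jeans_mass(10.0, 1e4))           # a few solar masses
# with a supersonic dispersion of 1 km/s the supported mass grows sharply
print(jeans_mass(10.0, 1e4, 1e5))
```

The same estimate also hints at the dual role stressed above: globally, turbulence raises the mass needed for collapse, while locally its shocks create density enhancements with much smaller Jeans masses.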
Star formation on the scale of galaxies as a whole is expected to be controlled by the balance between gravity and turbulence, just like star formation on the scale of individual interstellar gas clouds, but may be modulated by additional effects such as cooling and differential rotation. The dominant mechanism for driving interstellar turbulence in star-forming regions of galactic disks appears to be supernova explosions. In the outer disk of our Milky Way, or in low-surface-brightness galaxies, the coupling of rotation to the gas through magnetic fields or gravity may become important.
Biological materials, in addition to having remarkable physical properties, can also change shape and volume. These shape and volume changes allow organisms to form new tissue during growth and morphogenesis, as well as to repair and remodel old tissues. In addition, shape or volume changes in an existing tissue can lead to useful motion or force generation (actuation) that may even still function in the dead organism, as in the well-known example of the hygroscopic opening and closing of the pine cone. Both growth and actuation of tissues are mediated, in addition to biochemical factors, by the physical constraints of the surrounding environment and the architecture of the underlying tissue. This habilitation thesis describes biophysical studies carried out over the past years on growth- and swelling-mediated shape changes in biological systems. These studies use a combination of theoretical and experimental tools in an attempt to elucidate the physical mechanisms governing geometry-controlled tissue growth and geometry-constrained tissue swelling. It is hoped that, in addition to helping understand fundamental processes of growth and morphogenesis, ideas stemming from such studies can also be used to design new materials for medicine and robotics.
In this thesis, a collection of studies is presented that advance research on complex food webs in several directions. Food webs, as the networks of predator-prey interactions in ecosystems, are responsible for distributing the resources every organism needs to stay alive. They are thus central to our understanding of the mechanisms that support biodiversity, which in the face of increasing severity of anthropogenic global change and accelerated species loss is of highest importance, not least for our own well-being.
The studies in the first part of the thesis are concerned with general mechanisms that determine the structure and stability of food webs. It is shown how the allometric scaling of metabolic rates with the species' body masses supports their persistence in size-structured food webs (where predators are larger than their prey), and how this interacts with the adaptive adjustment of foraging efforts by consumer species to create stable food webs with a large number of coexisting species. The importance of the master trait body mass for structuring communities is further exemplified by demonstrating that the specific way the body masses of species engaging in empirically documented predator-prey interactions affect the predator's feeding rate dampens population oscillations, thereby helping both species to survive. In the first part of the thesis it is also shown that in order to understand certain phenomena of population dynamics, it may be necessary to not only take the interactions of a focal species with other species into account, but to also consider the internal structure of the population. This can refer for example to different abundances of age cohorts or developmental stages, or the way individuals of different age or stage interact with other species.
Building on these general insights, the second part of the thesis is devoted to exploring the consequences of anthropogenic global change on the persistence of species. It is first shown that warming decreases diversity in size-structured food webs. This is due to starvation of large predators on higher trophic levels, which suffer from a mismatch between their respiration and ingestion rates when temperature increases. In host-parasitoid networks, which are not size-structured, warming does not have these negative effects, but eutrophication destabilises the systems by inducing detrimental population oscillations. In further studies, the effect of habitat change is addressed. On the level of individual patches, increasing isolation of habitat patches has a similar effect as warming, as it leads to decreasing diversity due to the extinction of predators on higher trophic levels. In this case it is caused by dispersal mortality of smaller and therefore less mobile species on lower trophic levels, meaning that an increasing fraction of their biomass production is lost to the inhospitable matrix surrounding the habitat patches as they become more isolated. It is further shown that increasing habitat isolation desynchronises population oscillations between the patches, which in itself helps species to persist by dampening fluctuations on the landscape level. However, this is counteracted by an increasing strength of local population oscillations fuelled by an indirect effect of dispersal mortality on the feeding interactions. Last, a study is presented that introduces a novel mechanism for supporting diversity in metacommunities. It builds on the self-organised formation of spatial biomass patterns in the landscape, which leads to the emergence of spatio-temporally varying selection pressures that keep local communities permanently out of equilibrium and force them to continuously adapt. 
Because this mechanism relies on the spatial extension of the metacommunity, it is also sensitive to habitat change.
In the third part of the thesis, the consequences of biodiversity for the functioning of ecosystems are explored. The studies focus on standing stock biomass, biomass production, and trophic transfer efficiency as ecosystem functions. It is first shown that increasing the diversity of animal communities increases the total rate of intra-guild predation. The total biomass stock of the animal communities nevertheless increases, which also increases their exploitative pressure on the underlying plant communities. Despite this, the plant communities can maintain their standing stock biomass thanks to a shift of the body size spectra of both animal and plant communities towards larger species with lower specific respiration rates. In another study it is further demonstrated that the generally positive relationship between diversity and the above-mentioned ecosystem functions becomes steeper when not only the feeding interactions but also the numerous non-trophic interactions (such as predator interference or competition for space) between the species of an ecosystem are taken into account. Finally, two studies are presented that demonstrate the power of functional diversity as an explanatory variable. Functional diversity is interpreted as the range spanned by the functional traits of the species that determine their interactions. This approach makes it possible to understand mechanistically how the ecosystem functioning of food webs with multiple trophic levels is affected by all parts of the food web, and why a high functional diversity is required for the efficient transfer of energy from primary producers to the top predators.
The general discussion draws some synthesising conclusions, e.g. on the predictive power of ecosystem functioning to explain diversity, and provides an outlook on future research directions.
The therapeutic management of lipoedema poses a particular challenge in key respects owing to an insufficient state of knowledge. Because the pathogenesis of the disease has not been adequately clarified and no pathognomonic diagnostic criterion has yet been defined, many affected patients report years of suffering before therapeutic measures are initiated. Thanks to the growing awareness of the disease in recent years, the intervals until correct diagnosis have fortunately been shortened considerably. Although the attribution of their complaints to a clearly defined disease comes as a relief to many patients, the realisation that therapeutic options are limited often constitutes a renewed burden.
As a consequence of the unresolved pathogenesis, no causal therapy for lipoedema has been defined to date. Initially, the available conservative treatment strategies were only loosely embedded in a generally accepted concept, and their limitations in particular were not clearly defined. Although sufficient evidence is still lacking in several areas of therapy, a systematic review has made it possible to place the fundamental treatment options in relation to one another. Affected patients, as well as the various medical disciplines involved in treatment, thus have at their disposal a basic management algorithm whose recommendations go beyond the simple prescription of lymphatic drainage and compression garments. Through critical reflection on the prevailing dogmas, an interdisciplinary guideline was proposed that integrates all essential pillars of therapy into a generally applicable treatment plan in the form of a comprehensible stepwise scheme.
In the multilayered management of the disease, however, surgical treatment, i.e. liposuction, often remains the "ultima ratio" after conservative measures have failed to bring relief. The main objective of the present work therefore focuses on optimising the surgical approach to liposuction in patients with lipoedema, and highlights both the limits of the indication and the potential for long-term treatment success. Long-term results show that liposuction can be regarded as a safe procedure with the potential for lasting symptom reduction in lipoedema patients. The need to interlink surgical measures with conservative therapies, and thus to integrate liposuction as a sensible treatment alternative into a clearly delineated therapeutic concept, should also be emphasised.
Methodologically, the work draws on a total of 10 publications. The multi-stage mega-liposuction postulated here for the treatment of lipoedema, with cumulative total aspiration volumes of up to 66,000 ml across all procedures, was confirmed and validated as an evidence-based therapeutic procedure. The low complication rates described are, among other things, the result of a differentiated, individualised perioperative strategy. Beyond adherence to basic methodological principles, however, numerous variations exist whose implications for complication rates must each be considered separately. Although no consensus exists on a universally valid standard liposuction technique, numerous elements of perioperative management could be defined that have a potentially positive influence on the outcome regardless of the surgical technique used. Although liposuction for lipoedema can thus now be regarded overall as a safe procedure, some aspects remain to be conclusively clarified, above all volume management and the standardised definition of the maximum aspiration volume.
The analysis of the influence of various covariates on the relief of lipoedema-associated symptoms after liposuction shows that age, body mass index (BMI), and the preoperative stage of the disease have a significant influence on the postoperative result and must be taken into account when planning the multi-stage surgical approach. BMI- or body-weight-dependent targets for the aspiration volumes, by contrast, were not relevant as prognostic factors for the postoperative outcome. Whether this might be because the "necessary" volume threshold for adequate symptom relief is routinely exceeded when mega-liposuctions are performed, or whether this parameter truly has no influence on the postsurgical result, could not be conclusively clarified.
Furthermore, a positive benefit for the comorbidities associated with lipoedema was demonstrated. The spectrum of treatment methods can thus be sensibly extended by a lasting alternative through the routine integration of liposuction into the therapeutic regimen. In contrast to conservative therapy alone, this constitutes an essential step away from purely symptomatic treatment. The varied symptomatology of the diverse associated comorbidities must also be taken into account. As a consequence, and in view of the need for a holistic, interdisciplinary therapeutic approach, the term "lipoedema syndrome" might be more apt and is put forward for discussion.
For a distinct group of patients, basic principles of the perioperative approach were also examined in detail. Lipoedema patients with concomitant von Willebrand syndrome pose an extraordinary challenge with regard to bleeding complications. The available evidence-based recommendations for the therapeutic management of these patients in procedures of similar risk classification were systematically reviewed and related to the specific requirements of mega-liposuction. The treatment scheme developed in the process will in future considerably improve the preoperative detection of coagulopathies in general, and the perioperative complication rate in von Willebrand patients in particular.
In summary, a generally applicable algorithm for the modern and durably successful treatment of lipoedema patients, with a particular focus on mega-liposuction, could thus be developed. With adequate perioperative management and due regard for the large volume shifts involved, the procedure can be performed safely and with few complications. The pathophysiology of the disease remains unresolved; an immunological origin, as well as a primary pathology of the lymphatic vessel system or of the fat (precursor) cells, are favoured as explanatory models. The development of diagnostic biomarkers should be pursued.
The habilitation deals with the numerical analysis of the recurrence properties of geological and climatic processes. The recurrence of states of dynamical processes can be analysed with recurrence plots and various recurrence quantification options. In the present work, the meaning of the structures and of the information contained in recurrence plots is examined and described. New developments have led to extensions that can be used to describe recurring patterns in both space and time. Other important developments include recurrence-plot-based approaches to identify abrupt changes in a system's dynamics, to detect and investigate external influences on the dynamics of a system and the couplings between different systems, as well as a combination of recurrence plots with the methodology of complex networks. Typical problems in geoscientific data analysis, such as irregular sampling and uncertainties, are tackled by specific modifications and additions. The development of a significance test allows the statistical evaluation of quantitative recurrence analysis, especially for the identification of dynamical transitions. Finally, an overview of typical pitfalls that can occur when applying recurrence-based methods is given, and guidelines on how to avoid them are discussed. In addition to the methodological aspects, the application potential, particularly for geoscientific research questions, is discussed, such as the identification and analysis of transitions in past climates, the study of the influence of external factors on ecological or climatic systems, or the analysis of land-use dynamics based on remote sensing data.
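The basic object of the analysis, a recurrence plot, is simple to construct: threshold the pairwise distances between the states of the system. A minimal sketch follows, using an illustrative periodic test signal and an arbitrarily chosen threshold (both assumptions for demonstration, not data from the thesis), together with the simplest quantification measure, the recurrence rate.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps.
    (One-dimensional states for simplicity; for real applications the
    time series would typically be delay-embedded first.)"""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

t = np.linspace(0, 8 * np.pi, 200)
x = np.sin(t)              # periodic signal -> parallel diagonal line structures

R = recurrence_matrix(x, eps=0.1)
recurrence_rate = R.mean() # fraction of recurrent point pairs
print(recurrence_rate)
```

Quantification measures beyond the recurrence rate (e.g. statistics of diagonal and vertical line lengths) characterise determinism and laminarity, and their changes over time flag the dynamical transitions discussed above.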
In the present thesis, the self-assembly of hydrophilic polymers, reinforced hydrogels, and inorganic/polymer hybrids was examined. The thesis describes an avenue from polymer synthesis via various methods, through polymer self-assembly, to the formation of polymer materials with promising properties for future applications.
Hydrophilic polymers were utilized to form multi-phase systems, water-in-water emulsions, and self-assembled structures, e.g. particles/aggregates or hollow structures, from completely water-soluble building blocks. The structuring of aqueous environments by hydrophilic homo- and block copolymers was further utilized in the formation of supramolecular hydrogels with compartments or specific thermal behavior. Furthermore, inorganic graphitic carbon nitride (g-CN) was utilized as a photoinitiator for hydrogel formation and as a reinforcement for hydrogels. In this way, hydrogels with remarkable mechanical properties were synthesized, e.g. high compressibility, high storage modulus, or lubricity. In addition, g-CN was combined with polymers to obtain a broad range of materials, e.g. coatings, films, or latexes, that could be utilized in photocatalytic applications. Another inorganic material class, namely metal-organic frameworks (MOFs), was combined with polymers in the present thesis as well. It was shown that the pore structure of MOFs enables improved control over tacticity and the achievement of high molar masses. Furthermore, MOF-based polymerization catalysis was introduced, with improved control for coordinating monomers, catalyst recyclability, and decreased metal contamination in the product. Finally, the effect of external influences on MOF morphology was studied, e.g. via solvent or polymer additives, which allowed the formation of various MOF structures.
Overall, advances in several areas of polymer science are presented here. A major topic of the thesis was hydrophilic polymers and hydrogels, which currently constitute significant materials in the polymer field due to promising future applications in biomedicine. Moreover, the combination of polymers with materials from other areas of research, i.e. g-CN and MOFs, provided various new materials with remarkable properties that are also of interest for future applications, e.g. coatings, particle structures and catalysis.
In this work, the basic principles of the self-organization of diblock copolymers having the inherent property of selective or specific non-covalent binding were examined. By introducing electrostatic, dipole–dipole, or hydrogen-bonding interactions, the aim was to add complexity to the self-assembled mesostructures and to extend the level of ordering from the nanometer to a larger length scale. This work may be seen in the framework of biomimetics, as it combines features of synthetic polymer and colloid chemistry with basic concepts of structure formation that apply in supramolecular and biological systems. The copolymer systems under study were (i) block ionomers, (ii) block copolymers with acetoacetoxy chelating units, and (iii) polypeptide block copolymers.
The role played by azobenzene polymers in modern photonic, electronic and opto-mechanical applications cannot be overestimated. These polymers are successfully used to produce alignment layers for liquid-crystalline fluorescent polymers in display and semiconductor technology, to build waveguides and waveguide couplers, as data storage media, and as labels in product quality protection. A very hot topic in modern research is light-driven artificial muscles based on azobenzene elastomers. The incorporation of azobenzene chromophores into polymer systems via covalent bonding, or even by blending, gives rise to a number of unusual effects under visible (VIS) and ultraviolet light irradiation. The most amazing effect is the inscription of surface relief gratings (SRGs) onto thin azobenzene polymer films. At least seven models have been proposed to explain the origin of the inscribing force, but none of them satisfactorily describes the light-induced material transport on the molecular level. In most models, to explain the mass transport over micrometer distances during irradiation at room temperature, it is necessary to assume a considerable degree of photoinduced softening, at least comparable with that at the glass transition. Contrary to this assumption, we have gathered convincing evidence that there is no considerable softening of the azobenzene layers under illumination. We can now say with confidence that light-induced softening is a very weak accompanying effect rather than a necessary condition for the formation of SRGs. This means that the inscribing force should be above the yield point of the azobenzene polymer. Hence, an appropriate approach to describe the formation and relaxation of SRGs is a viscoplastic theory. It was used to reproduce the pulse-like inscription of SRGs as measured by VIS light scattering.
At longer inscription times, the VIS scattering pattern exhibits some peculiarities which can be explained by the appearance of a density grating that will be shown to arise due to the finite compressibility of the polymer film. As a logical consequence of the aforementioned research, a thermodynamic theory explaining the light-induced deformation of free-standing films and the formation of SRGs is proposed. The basic idea of this theory is that, under homogeneous illumination, an initially isotropic sample should stretch itself along the polarization direction to compensate the entropy decrease produced by the photoinduced reorientation of azobenzene chromophores. Finally, some ideas about the further development of this controversial topic will be discussed.
Phonology Limited
(2007)
Phonology Limited is a study of the areas of phonology where the application of optimality theory (OT) has previously been problematic. Evidence from a wide variety of phenomena in a wide variety of languages is presented to show that interactions involving more than just faithfulness and markedness are best analyzed as involving language-specific morphological constraints rather than universal phonological constraints. OT has proved to be a highly insightful and successful theory of linguistics in general and phonology in particular, focusing as it does on surface forms and treating the relationship between inputs and outputs as a form of conflict resolution. Yet there have also been a number of serious problems with the approach that have led some detractors to argue that OT has failed as a theory of generative grammar. The most serious of these problems is opacity, defined as a state of affairs where the grammatical output of a given input appears to violate more constraints than an ungrammatical competitor. It is argued that these problems disappear once language-specific morphological constraints are allowed to play a significant role in analysis. Specifically, a number of processes of Tiberian Hebrew traditionally considered opaque are reexamined and shown to be straightforwardly transparent, but crucially involving morphological constraints on form, such as a constraint requiring certain morphological forms to end with a syllabic trochee, or a constraint requiring paradigm uniformity with regard to the occurrence of fricative allophones of stop phonemes. Language-specific morphological constraints are also shown to play a role in allomorphy, where a lexeme is associated with more than one input; the constraint hierarchy then decides which input is grammatical in which context. For example, [ɨ]/[ə] and [u]/[ə] alternation found in some lexemes but not in others in Welsh is attributed to the presence of two inputs for the lexemes with the alternation. 
A novel analysis of the initial consonant mutations of the modern Celtic languages argues that mutated forms are separately listed inputs chosen in appropriate contexts by constraints on morphology and syntax, rather than being outputs that are phonologically unfaithful to their unmutated inputs. Finally, static irregularities and lexical exceptions are examined and shown to be attributable to language-specific morphological constraints. In American English, the distribution of tense and lax vowels is predictable in several contexts; however, in some contexts, the distributions of tense [ɔ] vs. lax [a] and of tense [æ] vs. lax [æ] are not as expected. It is shown that clusters of output-output faithfulness constraints create a pattern to which words are attracted, which however violates general phonological considerations. New words that enter the language first obey the general phonological considerations before being attracted into the language-specific exceptional pattern.
In a classical context, synchronization means the adjustment of rhythms of self-sustained periodic oscillators due to their weak interaction. The history of synchronization goes back to the 17th century, when the famous Dutch scientist Christiaan Huygens reported his observation of the synchronization of pendulum clocks: when two such clocks were put on a common support, their pendula moved in perfect agreement. In rigorous terms, this means that due to the coupling the clocks started to oscillate with identical frequencies and tightly related phases. Although it is probably the oldest scientifically studied nonlinear effect, synchronization was understood only in the 1920s, when E. V. Appleton and B. van der Pol systematically studied, theoretically and experimentally, the synchronization of triode generators. Since then, the theory has been well developed and has found many applications. Nowadays it is well known that certain systems, even rather simple ones, can exhibit chaotic behaviour. This means that their rhythms are irregular and cannot be characterized by a single frequency. However, as is shown in the habilitation work, one can extend the notion of phase to systems of this class as well and observe their synchronization, i.e., an agreement of their (still irregular!) rhythms: due to very weak interaction, relations appear between the phases and average frequencies. This effect, called phase synchronization, was later confirmed in laboratory experiments by other scientific groups. The understanding of the synchronization of irregular oscillators allowed us to address an important problem of data analysis: how to reveal weak interaction between systems if we cannot influence them but can only passively observe them, measuring some signals. This situation is very often encountered in biology, where synchronization phenomena appear on every level, from cells to macroscopic physiological systems, in normal states as well as in severe pathologies.
With our methods we found that the cardiovascular and respiratory systems in humans can adjust their rhythms; the strength of their interaction increases with maturation. Next, we used our algorithms to analyse the brain activity of Parkinsonian patients. The results of this collaborative work with neuroscientists show that different brain areas synchronize just before the onset of pathological tremor. Moreover, we succeeded in localizing the brain areas responsible for tremor generation.
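The basic idea behind such passive-observation methods can be sketched as follows: extract instantaneous phases from two measured signals via the Hilbert transform and quantify their locking with the mean phase coherence. This is a generic textbook estimator, not the specific algorithms developed in the habilitation.

```python
import numpy as np
from scipy.signal import hilbert

def phase_sync_index(s1, s2):
    """Mean phase coherence |<exp(i*(phi1 - phi2))>| in [0, 1]:
    near 1 for phase-locked signals, near 0 for independent phases."""
    phi1 = np.angle(hilbert(s1 - np.mean(s1)))
    phi2 = np.angle(hilbert(s2 - np.mean(s2)))
    return np.abs(np.mean(np.exp(1j * (phi1 - phi2))))

# Two noisy oscillations with a fixed phase relation are strongly locked;
# an oscillation at an unrelated frequency is not.
t = np.linspace(0, 100, 5000)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 1.0 * t) + 0.1 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 1.0 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
c = np.sin(2 * np.pi * 1.37 * t) + 0.1 * rng.standard_normal(t.size)
```

Note that the index only detects a statistical phase relation; distinguishing genuine coupling from common driving requires the more careful analysis discussed in the thesis.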
Synchronization of coupled oscillators manifests itself in many natural and man-made systems, including circadian clocks, central pattern generators, laser arrays, power grids, and chemical and electrochemical oscillators, to name only a few. The mathematical description of this phenomenon is often based on the paradigmatic Kuramoto model, which represents each oscillator by one scalar variable, its phase. When coupled, phase oscillators constitute a high-dimensional dynamical system, which exhibits complex behaviour ranging from synchronized uniform oscillation to quasiperiodicity and chaos. The corresponding collective rhythms can be useful or harmful to the normal operation of various systems, and they have therefore been the subject of much research.
Initially, synchronization phenomena were studied in systems with all-to-all (global) and nearest-neighbour (local) coupling, or on random networks. However, in recent decades there has been a lot of interest in more complicated coupling structures, which take into account the spatially distributed nature of real-world oscillator systems and the distance-dependent nature of the interaction between their components. Examples of such systems abound in biology and neuroscience. They include spatially distributed cell populations, cilia carpets and neural networks relevant to working memory. In many cases, these systems support a rich variety of patterns of synchrony and disorder with remarkable properties that have not been observed in other continuous media. Such patterns are usually referred to as coherence-incoherence patterns, but in symmetrically coupled oscillator systems they are also known by the name chimera states.
The main goal of this work is to give an overview of different types of collective behaviour in large networks of spatially distributed phase oscillators and to develop mathematical methods for their analysis. We focus on the Kuramoto models for one-, two- and three-dimensional oscillator arrays with nonlocal coupling, where the coupling extends over a range wider than nearest-neighbour coupling and depends on separation. We use the fact that, for a special (but still quite general) phase interaction function, the long-term coarse-grained dynamics of the above systems can be described by a certain integro-differential equation that follows from the mathematical approach called the Ott-Antonsen theory. We show that this equation adequately represents all relevant patterns of synchrony and disorder, including stationary, periodically breathing and moving coherence-incoherence patterns. Moreover, we show that this equation can be used to completely solve the existence and stability problem for each of these patterns and to reliably predict their main properties in many application-relevant situations.
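A minimal direct simulation of the class of systems studied here is a ring of identical Kuramoto-Sakaguchi oscillators, each coupled to a finite neighbourhood. The sketch below is illustrative only: the parameter values (phase lag α close to π/2, which is the regime where coherence-incoherence patterns are typically reported) are assumptions, not taken from the thesis, and the thesis' analysis proceeds via the Ott-Antonsen equation rather than direct integration.

```python
import numpy as np

def simulate_ring(N=128, R=40, K=1.0, alpha=1.45, dt=0.05, steps=2000, seed=1):
    """Euler integration of identical Kuramoto-Sakaguchi phase oscillators on a
    ring, each coupled to its R nearest neighbours on both sides (nonlocal
    coupling). Works in the co-rotating frame, so the natural frequency is 0."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, N)  # random initial phases
    idx = np.arange(N)
    # neighbourhood of each oscillator: offsets -R..R excluding 0
    offsets = np.concatenate([np.arange(-R, 0), np.arange(1, R + 1)])
    neigh = (idx[:, None] + offsets[None, :]) % N
    for _ in range(steps):
        diff = theta[:, None] - theta[neigh] + alpha
        theta = theta + dt * (-K / (2 * R) * np.sin(diff).sum(axis=1))
        theta = np.mod(theta + np.pi, 2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return theta

def local_order(theta, R=40):
    """Magnitude of the local Kuramoto order parameter; values near 1 mark
    coherent regions, lower values mark incoherent ones."""
    N = theta.size
    z = np.exp(1j * theta)
    return np.abs([z[np.arange(i - R, i + R + 1) % N].mean() for i in range(N)])
```

Plotting `local_order(simulate_ring())` over the ring is the usual way to visualise where the array is coherent and where it is not.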
Our everyday experience is connected with various acoustic noises or music. Usually, noise is a nuisance in any communication and destroys any order in a system. Similar optical effects are known: heavy snow or rain reduces the quality of vision. In contrast to these situations, noisy stimuli can also play a positive, constructive role; for example, a driver can be more concentrated in the presence of quiet music. Transmission processes in neural systems are of special interest from this point of view: excitation or information is transmitted only if a signal overcomes a threshold. Dr. Alexei Zaikin from the University of Potsdam studies noise-induced phenomena in nonlinear systems from a theoretical point of view. He is especially interested in processes in which noise influences the behaviour of a system twice: if the intensity of the noise exceeds a threshold, it induces a regular structure that is synchronized with the behaviour of neighbouring elements. To obtain such a system with a threshold, one needs a second noise source. Dr. Zaikin has analyzed further examples of such doubly stochastic effects and developed a concept of these new phenomena. These theoretical findings are important because such processes can play a crucial role in neurophysics, technical communication devices and the life sciences.
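The constructive role of noise in threshold systems can be illustrated with a minimal sketch. This is a generic threshold-crossing example (the textbook starting point of stochastic resonance), not one of Zaikin's doubly stochastic models, in which a second noise source generates the threshold structure itself.

```python
import numpy as np

def threshold_crossings(signal_amp, noise_sigma, threshold=1.0, n=20000, seed=2):
    """Count upward threshold crossings of a periodic signal plus Gaussian
    noise. A subthreshold signal (amplitude < threshold) never crosses on
    its own; added noise lets it through."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) * 1e-3
    x = signal_amp * np.sin(2 * np.pi * 5 * t) + noise_sigma * rng.standard_normal(n)
    above = x > threshold
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

# Amplitude 0.8 stays below the threshold 1.0 without noise,
# but noise of moderate intensity produces transmitted events:
quiet = threshold_crossings(0.8, 0.0)
noisy = threshold_crossings(0.8, 0.3)
```

The crossings cluster near the signal maxima, which is exactly the sense in which noise helps the subthreshold signal to be transmitted.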
Alfred Wegener's ideas on continental drift were doubted for several decades, until the discovery of magnetic polarity changes recorded at the Atlantic seafloor and seismic catalogs imaging oceanic subduction underneath the continental crust (Wadati-Benioff zone). It took another 20 years until plate motion could be directly observed and quantified using space geodesy. Since then, neotectonic research without the use of satellite-based methods has been unthinkable.
Thanks to a tremendous increase of instrumental observations in space and time over the last decades, we have significantly increased our knowledge of the complexity of the seismic cycle, that is, the interplay of tectonic stress build-up and release. The classical assumption that earthquakes are the only significant phenomenon releasing strain previously accumulated in a linear fashion is outdated. We now know that this concept is in fact decorated with a wide range of slow and fast processes, such as triggered slip, afterslip, post-seismic and visco-elastic relaxation of the lower crust, dynamic pore-pressure changes in the elastic crust, aseismic creep, slow slip events and seismic swarms. On the basis of eleven peer-reviewed studies, I here present the diversity of crustal deformation processes. Based on time-series analyses of radar imagery and satellite-based positioning data, I quantify tectonic surface deformation and use numerical and analytical models together with independent geologic and seismologic data to better understand the underlying crustal processes.
The main part of my work focuses on the deformation observed in the Pamir, the Hindu Kush and the Tian Shan, which together build the highly active continental collision zone between Northwest India and Eurasia. Centered around the Sarez earthquake that ruptured the center of the Pamir in 2015, I present diverse examples of crustal deformation phenomena. The driver of the deformation is the Indian indenter, bulldozing into the Pamir and compressing the orogen, which then collapses westward into the Tajik depression. My second natural observatory for studying tectonic deformation is the oceanic subduction zone in Chile, which repeatedly hosts large earthquakes of magnitude 8 and more. These are ideally suited to studying post-seismic relaxation processes and the coupling of large earthquakes.
My findings illustrate in what complex fashion, and to what degree, the different deformation phenomena are coupled in space and time. My publications contribute to the awareness that the classical concept of the seismic cycle needs to be revised, which, in turn, has a large influence on classical probabilistic seismic hazard assessment, which primarily relies on statistically robust recurrence times.
Continental rift systems open up unique possibilities to study the geodynamic system of our planet: geodynamic localization processes are imprinted in the morphology of the rift by governing the time-dependent activity of faults and the topographic evolution of the rift, or by controlling whether a rift is symmetric or asymmetric. Since lithospheric necking localizes strain towards the rift centre, deformation structures of previous rift phases are often well preserved, and passive margins, the end product of continental rifting, retain key information about the tectonic history from rift inception to continental rupture.
Current understanding of continental rift evolution is based on combining observations from active rifts with data collected at rifted margins. Connecting these isolated data sets is often accomplished in a conceptual way and leaves room for subjective interpretation. Geodynamic forward models, however, have the potential to link individual data sets in a quantitative manner, using additional constraints from rock mechanics and rheology, which makes it possible to transcend previous conceptual models of rift evolution. By quantifying geodynamic processes within continental rifts, numerical modelling provides key insights into tectonic processes that also operate in other plate-boundary settings, such as mid-ocean ridges, collisional mountain chains or subduction zones.
In this thesis, I combine numerical, plate-tectonic, analytical, and analogue modelling approaches, with numerical thermomechanical modelling as the primary tool. This method has advanced rapidly during the last two decades owing to dedicated software development and the availability of massively parallel computing facilities. Nevertheless, only recently has the geodynamic modelling community been able to capture 3D lithospheric-scale rift dynamics from the onset of extension to final continental rupture.
The first chapter of this thesis provides a broad introduction to continental rifting, a summary of the applied rift-modelling methods and a short overview of previous studies. The following chapters, which constitute the main part of this thesis, feature studies on plate-boundary dynamics in two and three dimensions, followed by global-scale analyses (Fig. 1).
Chapter II focuses on 2D geodynamic modelling of rifted-margin formation. It highlights the formation of wide areas of hyperextended crustal slivers via rift migration as a key process that affected many rifted margins worldwide. This chapter also contains a study of rift-velocity evolution, showing that rift strength loss and extension velocity are linked through a dynamic feedback. This process results in abrupt accelerations of the involved plates during rifting, illustrating for the first time that rift dynamics plays a role in changing global-scale plate motions. Since rift velocity affects key processes like faulting, melting and lower-crustal flow, this study also implies that the slow-fast velocity evolution should be imprinted in rifted-margin structures.
Chapter III relies on 3D Cartesian rift models in order to investigate various aspects of rift obliquity. Oblique rifting occurs if the extension direction is not orthogonal to the rift trend. Using 3D lithospheric-scale models from rift initialisation to breakup, I could isolate a characteristic evolution of dominant fault orientations. Further work in Chapter III addresses the impact of rift obliquity on the strength of the rift system. We illustrate that oblique rifting is mechanically preferred over orthogonal rifting because brittle yielding requires a lower tectonic force. This mechanism elucidates rift competition during South Atlantic rifting, where the more oblique Equatorial Atlantic Rift proceeded to breakup while the simultaneously active but less oblique West African rift system became a failed rift. Finally, this chapter also investigates the impact of a previous rift phase on current tectonic activity in the linkage area between the Kenyan and Ethiopian rifts. We show that the along-strike changes in rift style are not caused by changes in crustal rheology. Instead, the rift-linkage pattern in this area can be explained by accounting for the thinned crust and lithosphere of a Mesozoic rift event.
Chapter IV investigates rifting from a global perspective. A first study extends the oblique-rift topic of the previous chapter to the global scale by investigating the frequency of oblique rifting during the last 230 million years. We find that approximately 70% of all ocean-forming rift segments involved an oblique component of extension, with obliquities exceeding 20°. This highlights the relevance of 3D approaches in the modelling, surveying, and interpretation of many rifted margins. In a final study, we propose a link between continental rift activity, diffuse CO2 degassing and Mesozoic/Cenozoic climate changes. We used recent CO2 flux measurements in continental rifts to estimate the worldwide rift-related CO2 release, based on the global extent of rifts through time. The first-order correlation with paleo-atmospheric CO2 proxy data suggests that rifts constitute a major element of the global carbon cycle.
The Arctic plays a key role in Earth's climate system, as global warming is predicted to be most pronounced at high latitudes and because one third of the global carbon pool is stored in ecosystems of the northern latitudes. In order to improve our understanding of present and future carbon dynamics in climate-sensitive permafrost ecosystems, the present study concentrates on investigations of the microbial controls of methane fluxes, on the activity and structure of the involved microbial communities, and on their response to changing environmental conditions. For this purpose, an integrated research strategy was applied, which connects trace-gas flux measurements to the soil-ecological characterisation of permafrost habitats and molecular-ecological analyses of microbial populations. Furthermore, methanogenic archaea isolated from Siberian permafrost have been used as potential keystone organisms for studying and assessing life under extreme living conditions. Long-term studies on methane fluxes have been carried out since 1998. These studies revealed considerable seasonal and spatial variations of methane emissions for the different landscape units, ranging from 0 to 362 mg m⁻² d⁻¹. For the overall balance of methane emissions from the entire delta, the first land-cover classification based on Landsat images was performed and applied for an upscaling of the methane flux data sets. The regionally weighted mean daily methane emission of the Lena Delta (10 mg m⁻² d⁻¹) is only one fifth of the values calculated for other Arctic tundra environments. The calculated annual methane emission of the Lena Delta amounts to about 0.03 Tg. The low methane emission rates obtained in this study are the result of the high-resolution remote-sensing data basis used, which provides a more realistic estimation of the actual methane emissions on a regional scale. Soil temperature and near-surface atmospheric turbulence were identified as the driving parameters of methane emissions.
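The area-weighted upscaling behind such a regional budget can be sketched as follows. The land-cover classes, fluxes and areas below are hypothetical illustration values, not the thesis data; only the arithmetic (weighted mean flux, then annual total) mirrors the procedure described above.

```python
# Hypothetical land-cover classes with daily methane fluxes and areas
# (illustration values only, not the Lena Delta classification).
classes = {
    "wet polygonal tundra": {"flux_mg_m2_d": 30.0, "area_km2": 5000.0},
    "dry tundra":           {"flux_mg_m2_d": 2.0,  "area_km2": 15000.0},
    "water bodies":         {"flux_mg_m2_d": 8.0,  "area_km2": 5000.0},
}

total_area_m2 = sum(c["area_km2"] for c in classes.values()) * 1e6

# Area-weighted mean daily flux (mg m^-2 d^-1)
weighted_flux = sum(
    c["flux_mg_m2_d"] * c["area_km2"] * 1e6 for c in classes.values()
) / total_area_m2

# Annual emission in Tg (1 Tg = 1e15 mg)
annual_Tg = weighted_flux * total_area_m2 * 365 / 1e15
```

With these illustration values the weighted mean flux is 8.8 mg m⁻² d⁻¹ and the annual total about 0.08 Tg; the point is that the weighted mean, and hence the budget, is dominated by how much area each class covers.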
A flux model based on these variables explained reasonably well the variations of the methane budget corresponding to the continuous processes of microbial methane production and oxidation and of gas diffusion through soil and plants. The results show that the Lena Delta contributes significantly to the global methane balance because of its extensive wetland areas. The microbiological investigations showed that permafrost soils are colonized by high numbers of microorganisms. The total biomass is comparable to that of temperate soil ecosystems. The activities of methanogens and methanotrophs differed significantly in their rates and distribution patterns, both along the vertical profiles and between the different investigated soils. The methane production rates varied between 0.3 and 38.9 nmol h⁻¹ g⁻¹, while methane oxidation ranged from 0.2 to 7.0 nmol h⁻¹ g⁻¹. Phylogenetic analyses of methanogenic communities revealed a distinct diversity of methanogens affiliated with the Methanomicrobiaceae, Methanosarcinaceae and Methanosaetaceae, which partly form four specific permafrost clusters. The results demonstrate the close relationship between methane fluxes and the fundamental microbiological processes in permafrost soils. The microorganisms not only survive in their extreme habitat but can also be metabolically active under in situ conditions. It was shown that a slight increase in temperature can lead to a substantial increase in methanogenic activity within perennially frozen deposits. In case of degradation, this would lead to an extensive expansion of the methane deposits, with subsequent impacts on the total methane budget. Further studies on the stress response of methanogenic archaea, especially Methanosarcina SMA-21, isolated from Siberian permafrost, revealed an unexpected resistance of the microorganisms to unfavourable living conditions. A better adaptation to environmental stress was observed at 4 °C compared to 28 °C.
For the first time, it could be demonstrated that methanogenic archaea from terrestrial permafrost even survive simulated Martian conditions. The results show that, under Mars-like climate conditions, permafrost methanogens are more resistant than methanogens from non-permafrost environments. Microorganisms comparable to methanogens from terrestrial permafrost can be seen as among the most likely candidates for life on Mars due to their physiological potential and metabolic specificity.
Fabricating electronic devices from natural, renewable resources has been a common goal in engineering and materials science for many years. In this regard, carbon is of special significance due to its biological compatibility. In the laboratory, carbonized materials and their composites have proven to be promising solutions for a range of future applications in electronics, optoelectronics, or catalytic systems. On the industrial scale, however, their application is inhibited by tedious and expensive preparation processes and a lack of control over the processing and material parameters. Therefore, we are exploring new concepts for the direct utilization of functional carbonized materials in electronic applications. In particular, laser-induced carbonization (carbon laser-patterning (CLaP)) is emerging as a new tool for the precise and selective synthesis of functional carbon-based materials for flexible on-chip applications.
We developed an integrated approach for on-the-spot laser-induced synthesis of flexible, carbonized films with specific functionalities. To this end, we design versatile precursor inks made from naturally abundant starting compounds and reactants to cast films which are carbonized with an infrared laser to obtain functional patterns of conductive porous carbon networks. In our studies we obtained deep mechanistic insights into the formation process and the microstructure of laser-patterned carbons (LP-C). We shed light on the kinetic reaction mechanism based on the interplay between the precursor properties and the reaction conditions. Furthermore, we investigated the use of porogens, additives, and reactants to provide a toolbox for the chemical and physical fine-tuning of the electronic and surface properties and the targeted integration of functional sites into the carbon network. Based on this knowledge, we developed prototype resistive chemical and mechanical sensors. In further studies, we show the applicability of LP-C as electrode materials in electrocatalytic and charge-storage applications.
To put our findings into a common perspective, in the general part our results are embedded into the context of general carbonization strategies, the fundamentals of laser-induced materials processing, and a broad literature review of state-of-the-art laser carbonization.
Rivers have always flooded their floodplains. Over 2.5 billion people worldwide have been affected by flooding in recent decades. The economic damage is also considerable, averaging 100 billion US dollars per year. There is no doubt that damage and other negative effects of floods can be avoided. However, this has a price, both financially and politically. Costs and benefits can be estimated through risk assessments. Questions about the location and frequency of floods, about the objects that could be affected and about their vulnerability are important for flood risk managers, insurance companies and politicians. Thus, variables and factors from the fields of hydrology and socio-economics both play a role, with multi-layered connections. One example is dikes along a river, which on the one hand contain floods, but on the other hand, by narrowing the natural floodplains, accelerate the flood discharge and increase the danger of flooding for residents downstream. Such larger connections must be included in the assessment of flood risk. In current procedures, however, this is accompanied by simplifying assumptions. Risk assessments are therefore fuzzy and associated with uncertainties.
This thesis investigates the benefits and possibilities of new data sources for improving flood risk assessment. New methods and models are developed which better take the aforementioned interrelations into account, and which also quantify the uncertainties of the model results and thus enable statements about the reliability of risk estimates. For this purpose, data on flood events from various sources are collected and evaluated. This includes precipitation and flow records at measuring stations as well as, for instance, images from social media, which, together with location information, can help to delineate the flooded areas and estimate flood damage. Machine-learning methods have been successfully used to recognize and understand correlations between floods and their impacts from a wide range of data and to develop improved models.
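The core of such data-driven damage models can be illustrated with a minimal sketch: fit a depth-damage relation to (here synthetic) flood records and use it for prediction. The data, the saturating curve and the quadratic fit below are illustration-only assumptions; the thesis' actual models are more elaborate machine-learning approaches trained on real event data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic flood records: water depth (m) and relative building damage (0..1)
depth = rng.uniform(0.0, 3.0, 200)
true_damage = 1.0 - np.exp(-0.8 * depth)             # saturating depth-damage curve
damage = np.clip(true_damage + 0.05 * rng.standard_normal(200), 0.0, 1.0)

# Fit a quadratic depth-damage function by least squares
X = np.column_stack([np.ones_like(depth), depth, depth**2])
coef, *_ = np.linalg.lstsq(X, damage, rcond=None)

def predict(d):
    """Predicted relative damage for water depth d (m)."""
    return coef[0] + coef[1] * d + coef[2] * d**2
```

Real applications replace the single depth predictor with many hydrological and socio-economic variables, and the simple regression with models that also deliver uncertainty estimates.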
Risk models help to develop and evaluate strategies to reduce flood risk. These tools also provide advanced insights into the interplay of the various factors and into the expected consequences of flooding. This work shows progress towards an improved assessment of flood risks by using diverse data from different sources with innovative methods, as well as by the further development of models. Flood risk is variable due to economic and climatic changes and other drivers of risk. In order to keep knowledge about flood risks up to date, robust, efficient and adaptable methods, as proposed in this thesis, are of increasing importance.