Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force of nature that we understand least. Among the cardinal predictions of this theory are black holes. Massive, dark objects are found in the majority of galaxies; our own Galactic Center contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and to look for deviations from real observations. Nuclear star clusters harboring a massive, dark object with a mass of up to ~ ten million solar masses are good testbeds for probing the event horizon of the potential SMBH with stars. The channels for interaction between stars and the central SMBH are that (a) compact stars and stellar-mass black holes can gradually inspiral into the SMBH due to the emission of gravitational radiation, which is known as an "Extreme Mass Ratio Inspiral" (EMRI), and (b) stars can produce gas that will be accreted by the SMBH, either through normal stellar evolution or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines, which combined will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
This professorial dissertation thesis collects several empirical studies on tax distribution and tax reform in Germany. Chapter 2 deals with two studies on effective income taxation, based on representative micro data sets from tax statistics. The first study analyses effective income taxation at the individual level, in particular with respect to the top incomes. It is based on an integrated micro data file of household survey data and income tax statistics, which captures the entire income distribution up to the very top. Despite substantial tax base erosion and reductions of top tax rates, the German personal income tax has remained effectively progressive. The distribution of the tax burden is highly concentrated, and the German economic elite is still taxed relatively heavily, even though the effective tax rate for this group has declined significantly. The second study of Chapter 2 highlights the effective income taxation of functional income sources, such as labor income, business income, and capital income. Using income tax micro data and microsimulation models, we allocate the individual income tax liability to the respective income sources, according to different apportionment schemes accounting for losses. We find that the choice of the apportionment scheme markedly affects the tax shares of income sources and implicit tax rates, in particular those of capital income. Income types without significant losses, such as labor income or transfer income, show higher tax shares and implicit tax rates if we account for losses. The opposite is true for capital income, in particular for income from renting and leasing. Chapter 3 presents two studies on business taxation, based on representative micro data sets from tax statistics and the microsimulation model BizTax. The first part provides a study on fundamental reform options for the German local business tax.
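The apportionment logic described for Chapter 2 can be illustrated with a toy sketch: a single tax liability is split across income sources in proportion to their income, either ignoring losses or netting them first. All names and figures below are hypothetical placeholders, not values from the study.

```python
# Toy apportionment of one income-tax liability to income sources.
# Two schemes: drop losses (treat negative incomes as zero) or net
# losses against positive income first. The scheme visibly shifts
# each source's implied tax share, as discussed above.

def apportion(tax, incomes, net_losses=False):
    """Return each source's share of `tax`, apportioned by income weight."""
    if net_losses:
        base = dict(incomes)                                  # keep negatives
    else:
        base = {k: max(v, 0.0) for k, v in incomes.items()}   # drop losses
    total = sum(base.values())
    return {k: tax * v / total for k, v in base.items()}

# Hypothetical example: labor income 50,000, capital loss of 10,000,
# total tax liability 10,000.
incomes = {"labor": 50000.0, "capital": -10000.0}
no_losses = apportion(10000.0, incomes)                # labor bears all the tax
with_losses = apportion(10000.0, incomes, net_losses=True)
```

Netting losses raises labor's apportioned share above the total tax and assigns capital a negative share, mirroring the qualitative pattern reported above.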
We find that today's high concentration of local business tax revenues on corporations with high profits decreases if the tax base is broadened by integrating more taxpayers and by including more elements of business value added. The reform scenarios with a broader tax base distribute the local business tax revenue per capita more equally across regional categories. The second study of Chapter 3 discusses the macroeconomic performance of business taxation against the background of corporate income. A comparison of the tax base reported in tax statistics with the macroeconomic corporate income from national accounts points to considerable tax base erosion. The average implicit tax rate on corporate income has been around 20 percent since 2001, thus falling considerably short of statutory tax rates and effective tax rates discussed in the literature. For lack of detailed accounting data, it is hard to give precise reasons for the presumed tax base erosion. Chapter 4 deals with several assessment studies on the ecological tax reform implemented in Germany as of 1999. First, we describe the scientific, ideological, and political background of the ecological tax reform. Further, we present the main findings of a first systematic impact analysis. We employ two macroeconomic models, an econometric input-output model and a recursive-dynamic computable general equilibrium (CGE) model. Both models show that Germany's ecological tax reform helps to reduce energy consumption and CO2 emissions without having a substantial adverse effect on overall economic growth. It could have a slightly positive effect on employment. The reform's impact on the business sector and the effects of special provisions granted to agriculture and the goods and materials sectors are outlined in a further study. The special provisions avoid higher tax burdens on energy-intensive production. However, they largely reduce the marginal tax rates and thus the incentives for energy saving.
Though the 2003 reform of the special provisions increased the overall tax burden of the energy-intensive industry, the enlarged eligibility for tax rebates neutralizes the ecological incentives. Based on the Income and Consumption Survey of 2003, we have analyzed the distributional impact of the ecological tax reform. The increased energy taxes show a clearly regressive impact relative to disposable income. Families with children face a higher tax burden relative to household income. The reduction of pension contributions and the automatic adjustment of social security transfers largely mitigate this regressive impact. Households with low income or with many children nevertheless bear a slight increase in tax burden. Refunding the eco tax revenue through an eco bonus would make the reform clearly progressive.
This cumulative habilitation thesis presents new work on the systematics, paleoecology, and evolution of antelopes and other large mammals, focusing mainly on the late Miocene to Pleistocene terrestrial fossil record of Africa and Arabia. The studies included here range from descriptions of new species to broad-scale analyses of diversification and community evolution in large mammals over millions of years. A uniting theme is the evolution, across both temporal and spatial scales, of the environments and faunas that characterize African savannas today. One conclusion of this work is that macroevolutionary changes in large mammals are best characterized at regional (subcontinental to continental) and long-term temporal scales. General views of evolution developed on records that are too restricted in spatial and temporal extent are likely to ascribe too much influence to local or short-lived events. While this distinction in the scale of analysis and interpretation may seem trivial, it is challenging to implement given the geographically and temporally uneven nature of the fossil record, and the difficulties of synthesizing spatially and temporally dispersed datasets. This work attempts to do just that, bringing together primary fossil discoveries from eastern Africa to Arabia, from the Miocene to the Pleistocene, and across a wide range of (mainly large mammal) taxa. The end result is support for hypotheses stressing the impact of both climatic and biotic factors on long-term faunal change, and a more geographically integrated view of evolution in the African fossil record.
Biogenic amines are small organic compounds that can act as neurotransmitters, neuromodulators, and/or neurohormones in both vertebrates and invertebrates. They form an important group of messenger substances and exert their effects by binding to a particular class of receptor proteins known as G protein-coupled receptors. In insects, the class of biogenic amines comprises the messengers dopamine, tyramine, octopamine, serotonin, and histamine. Among many other effects, it has been shown, for example, that some of these biogenic amines can modulate the gustatory sensitivity to sucrose stimuli in the honeybee (Apis mellifera). I have investigated various aspects of aminergic signal transduction in the "model organisms" honeybee and American cockroach (Periplaneta americana). From the honeybee, a model organism for the study of learning and memory, two dopamine receptors, a tyramine receptor, an octopamine receptor, and a serotonin receptor were characterized. The receptors were expressed in cultured mammalian cells in order to analyze their pharmacological and functional properties (coupling to intracellular messenger pathways). In addition, various techniques (RT-PCR, Northern blotting, in situ hybridization) were used to determine where and when during development the corresponding receptor mRNAs are expressed in the honeybee brain. The salivary glands of the American cockroach served as a model system for studying the cellular effects of biogenic amines. In isolated salivary glands, saliva production can be triggered by both dopamine and serotonin, with saliva of different composition being produced in each case. Dopamine induces the formation of a completely protein-free, watery saliva. Serotonin causes the secretion of a protein-containing saliva.
The serotonin-induced protein secretion is mediated by an increase in the concentration of the intracellular messenger cAMP. The pharmacological properties of the dopamine receptors of the cockroach salivary glands were investigated, and the molecular characterization of putative aminergic receptors of the cockroach was begun. Furthermore, I characterized the ebony gene of the cockroach. This gene encodes an enzyme that, in the cockroach (as in other insects), is probably involved in the inactivation of biogenic amines and is expressed in the brain and the salivary glands of the cockroach.
Classical physics and chemistry distinguish three types of bonding: the covalent bond, the ionic bond, and the metallic bond. Molecules themselves, by contrast, are held together by weak interactions; despite the weakness of these forces, such interactions are less well understood, but no less important. In forward-looking fields such as nanotechnology, supramolecular chemistry, and biochemistry, they are of elementary importance.
In order to describe, predict, and understand weak intermolecular interactions, they must first be captured theoretically. This involves various quantum-chemical methods, which in this work are presented, compared, further developed, and finally applied to exemplary problems in chemistry. Building on a hierarchy of methods of different accuracy, these methods are employed, elaborated, and combined for these goals.
What is computed is the electronic structure, i.e. the distribution and energy of the electrons, which essentially hold the atoms together. Since the inaccuracies in the description of the electronic structure depend on the methods used, their effects can be examined in detail, described, and built upon for further development, and subsequently tested on various model systems. The speed of the calculations on modern computers is an essential component to be taken into account, since in general the accuracy grows exponentially with the computation time and must therefore eventually reach the limits of what is feasible.
The most accurate of the methods used is based on coupled-cluster theory, which enables very good predictions. For this method, so-called spectroscopic accuracy is achieved, with deviations of only a few wavenumbers, as comparisons with experimental data show. One way of approximating highly accurate methods is based on density functional theory: here the "Boese-Martin for Kinetics" (BMK) functional was developed, whose functional form reappears in many density functionals published after 2010.
With the help of the more accurate methods, semi-empirical force fields for the description of intermolecular interactions can finally be parameterized for individual systems; these require far less computation time than the methods based on the accurate calculation of the electronic structure of molecules.
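As a minimal illustration of such a parameterization (not the thesis's actual force fields), one can fit a simple 12-6 Lennard-Jones pair potential to reference interaction energies from a more accurate method; the reference data below are synthetic placeholders.

```python
# Fit a 12-6 Lennard-Jones pair potential E(r) = A/r**12 - B/r**6 to
# reference interaction energies. The model is linear in A and B, so a
# plain least-squares solve suffices; epsilon and sigma follow from
# A = 4*eps*sigma**12 and B = 4*eps*sigma**6.
import numpy as np

def fit_lj(r, e):
    """Least-squares fit of E(r) = A/r**12 - B/r**6; returns (epsilon, sigma)."""
    X = np.column_stack([r**-12.0, -(r**-6.0)])
    (A, B), *_ = np.linalg.lstsq(X, e, rcond=None)
    eps = B**2 / (4.0 * A)
    sigma = (A / B) ** (1.0 / 6.0)
    return eps, sigma

# Synthetic "reference" energies generated from eps = 1.0, sigma = 3.4
# (placeholder values standing in for accurate quantum-chemical data).
r_ref = np.linspace(3.2, 8.0, 25)
e_ref = 4.0 * 1.0 * ((3.4 / r_ref) ** 12 - (3.4 / r_ref) ** 6)
eps_fit, sigma_fit = fit_lj(r_ref, e_ref)   # recovers eps ~ 1.0, sigma ~ 3.4
```

In practice the reference energies would come from, e.g., coupled-cluster calculations, and the fitted potential then replaces them at a fraction of the cost.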
For larger systems, different methods can also be combined. Here, embedding schemes were refined and proposed together with new methodological approaches. They employ both symmetry-adapted perturbation theory and the quantum-chemical embedding of fragments in larger, quantum-chemically computed systems.
The development of new methods derives its value essentially from its application:
In this work, hydrogen bonds were initially the focus. They are among the stronger intermolecular interactions and remain a challenge. By contrast, van der Waals interactions are comparatively easy to describe with force fields. As a result, many of the methods in use today perform comparatively poorly for systems dominated by hydrogen bonds.
This is followed by an investigation of molecular aggregates and of the effects of intermolecular interactions on the vibrational frequencies of molecules. Here we also go beyond the so-called rigid-rotor/harmonic-oscillator approximation.
A far-reaching application concerns adsorbates, in this case molecules on ionic/metallic surfaces. They can be treated with methods similar to those for intermolecular interactions and can be described very accurately with special embedding schemes. The results of these theoretical calculations stimulated a re-evaluation of the previously known experimental results.
Molecular crystals are an extremely important field of research. They are held together by weak interactions ranging from van der Waals forces to hydrogen bonds. Here, too, newly developed methods were employed, which represent an interesting alternative that is at least as accurate as the currently common methods.
Hence both the developed methods and their applications are extremely diverse. The electronic-structure calculations treated here extend from the so-called post-Hartree-Fock methods via the use of density functional theory to semi-empirical force fields and their combinations. The applications range from individual molecules in the gas phase via adsorption on surfaces to the molecular solid.
Individuals differ in their tendency to perceive injustice and in their responses to these perceptions. Those high in justice sensitivity tend to show intense negative affective, cognitive, and behavioral responses to injustice, which in part also depend on the perspective from which the injustice is perceived. The present research project showed that inter-individual differences in justice sensitivity can already be measured and observed in childhood and adolescence, and that early adolescence seems to be an important age range and developmental stage for the stabilization of these differences. Furthermore, in cross-sectional studies the different justice sensitivity perspectives were related to different forms of externalizing (aggression, ADHD, bullying) and internalizing problem behavior (depressive symptoms) in children and adolescents as well as in adults. Victim sensitivity in particular appears to constitute an important risk factor for a broad range of both externalizing and internalizing maladaptive behaviors and mental health problems, as shown in the studies using longitudinal data. Regarding aggressive behavior, victim justice sensitivity may even constitute a risk factor above and beyond other important and well-established risk factors for aggression and similar sensitivity constructs that had previously been linked to this kind of behavior. In contrast, observer and perpetrator sensitivity (perpetrator sensitivity in particular) tended to show negative links with externalizing problem behavior and instead predicted prosocial behavior in children and adolescents. However, there were also isolated positive relations of perpetrator sensitivity with emotional problems, as well as of observer sensitivity with reactive aggression and depressive symptoms.
Taken together, the findings from the present research show that justice sensitivity forms in childhood at the latest and that it may have important long-term influences on pro- and antisocial behavior and mental health. Thus, justice sensitivity deserves more attention in research on the prevention of, and intervention in, mental health problems and antisocial behavior such as aggression.
Approaches to the parsability of several grammar formalisms that also generate non-context-free languages are explored. Chomsky grammars, Lindenmayer systems, grammars with controlled derivations, and grammar systems are treated. The formal properties of these mechanisms when used as language acceptors are investigated. Furthermore, cooperating distributed grammar systems are restricted so that efficient deterministic parsing without backtracking becomes possible. For this class of grammar systems, the parsing algorithm is presented and the feature of leftmost derivations is investigated in detail.
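The restriction to deterministic parsing without backtracking can be illustrated, in the much simpler single-grammar setting, by a table-driven predictive (LL(1)-style) parser performing a leftmost derivation; the toy grammar below is illustrative only and is not one of the cooperating distributed grammar systems treated in the thesis.

```python
# A minimal table-driven predictive parser for the toy grammar
#   S -> a S b | c
# At each step the next input symbol uniquely selects the production,
# so the leftmost derivation proceeds without any backtracking.

TABLE = {
    ("S", "a"): ["a", "S", "b"],   # S -> a S b
    ("S", "c"): ["c"],             # S -> c
}

def parse(word, start="S"):
    """Return True iff `word` is derivable from `start`; never backtracks."""
    stack = [start]
    pos = 0
    while stack:
        top = stack.pop()
        look = word[pos] if pos < len(word) else None
        if top.islower():                 # terminal: must match the input
            if look != top:
                return False
            pos += 1
        else:                             # nonterminal: consult the parse table
            prod = TABLE.get((top, look))
            if prod is None:
                return False
            stack.extend(reversed(prod))  # push right-hand side, leftmost on top
    return pos == len(word)
```

The accepted language is { a^n c b^n : n >= 0 }; deterministic parsers for restricted grammar systems generalize this idea of letting the lookahead fix the derivation step.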
This habilitation thesis summarises the research work performed by the author during the last fifteen years. The dissertation reflects his main research interests, which revolve around the quantum dynamics of small molecular systems, including their interactions with electromagnetic radiation or dissipative environments. This covers various dynamical processes involving bound-bound, bound-free, and free-free molecular transitions. These encompass light-triggered rovibrational or rovibronic dynamics in bound molecules, molecular photodissociation induced by weak or strong laser fields, state-to-state reactive and/or inelastic molecular collisions, and phonon-driven vibrational relaxation of adsorbates at solid surfaces. Although the dissertation covers different topics of molecular reaction dynamics, most of these studies focus on nuclear quantum effects and their manifestations in experimental observables. The latter are assessed through comparison between quantum and classical predictions, and/or direct confrontation of theory and experiment. Most well-known quantum concepts and effects are encountered in this work. Yet almost all of these quantum notions find their roots in the central pillar of quantum theory, namely the quantum superposition principle. Indeed, quantum coherence is the main source of most quantum effects, including interference, entanglement, and even tunneling. Thus, the common and predominant theme of all the investigations of this thesis is quantum coherence, and the survival or quenching of the ensuing interference effects in various molecular processes. The lion's share of the dissertation is devoted to two associated quantum concepts that are usually overlooked in computational molecular dynamics, viz. the Berry phase and the symmetry of identical nuclei.
The importance of the latter in dynamical molecular processes, and their direct fingerprints in experimental observables, also relies very much on quantum coherence and entanglement. All these quantum phenomena are thoroughly discussed within the four main topics that form the core of this thesis. Each topic is described in a separate chapter, where it is briefly summarised and then illustrated with three peer-reviewed publications. The first topic deals with the relevance of quantum coherence/interference in molecular collisions, with a focus on the hydrogen-exchange reaction, H+H2 --> H2+H, and its isotopologues. For these collision processes, interference of probability amplitudes becomes significant because of the existence of two main scattering pathways. The latter could be inelastic and reactive scattering, direct and time-delayed scattering, or two encircling reaction paths that loop in opposite senses around a conical intersection (CI) of the H3 molecular system. Our joint theoretical-experimental investigations of these processes reveal strong interference and geometric-phase (GP) effects in state-to-state reaction probabilities and differential cross sections. However, these coherent effects cancel completely in integral cross sections and reaction rate constants, due to efficient dephasing of the interference between the different scattering amplitudes. As byproducts of these studies, we highlight the discovery of two novel scattering mechanisms, which contradict conventional textbook pictures of molecular reaction dynamics. The second topic concerns the effect of the Berry phase on molecular photodynamics at conical intersections. To understand this effect, we developed a topological approach that separates the total molecular wavefunction of an unbound molecular system into two components, which wind in opposite senses around the conical intersection.
This separation reveals that the only effect of the geometric phase is to change the sign of the relative phase of these two components. This in turn leads to a shift in the interference pattern of the molecular system, a phase shift reminiscent of the celebrated Aharonov-Bohm effect. The procedure is numerically illustrated with photodynamics at standard model CIs, as well as strong-field dissociation of diatomics at light-induced conical intersections (LICIs). Beyond the fundamental aspect of these studies, their findings make it possible to interpret and predict the effect of the GP on the state-resolved or angle-resolved spectra of pump-probe experimental schemes, particularly the distributions of photofragments in molecular photodissociation experiments. The third topic pertains to the role of the indistinguishability of identical nuclei in molecular reaction dynamics, with an emphasis on dynamical localization in highly symmetric molecules. The main question of these studies is whether nuclear-spin statistics allow dynamical localization of the electronic, vibrational, or even rotational density on a specific molecular substructure or configuration rather than on another one that is identical (indistinguishable). Group-theoretic analysis of the symmetrized molecular wavefunctions of these systems shows that nuclear permutation symmetry engenders quantum entanglement between the eigenstates of the different molecular degrees of freedom. This subsequently leads to complete quenching of dynamical localization over indistinguishable molecular substructures, an observation in sharp contradiction with well-known textbook views of iconic molecular processes.
This is illustrated with various examples of quantum dynamics in symmetric double-well achiral molecules, such as the prototypical umbrella inversion motion of ammonia, electronic Kekulé dynamics in the benzene molecule, and coupled electron-nuclear dynamics in laser-induced indirect photodissociation of the dihydrogen molecular cation. The last part of the thesis is devoted to the development of approximate wavefunction approaches for phonon-induced vibrational relaxation of adsorbates (system) at surfaces (bath). Due to the so-called 'curse of dimensionality', these system-bath complexes cannot be handled with standard wavefunction methods. To alleviate the exponential scaling of the latter, we developed approximate yet quite accurate numerical schemes that scale polynomially with the bath dimensionality. The corresponding algorithms combine symmetry-based reductions of the full vibrational Hilbert space with iterative Krylov techniques. These approximate wavefunction approaches resemble the 'Bixon-Jortner model' and the more general 'quantum tier model'. This is illustrated with the decay of H-Si (D-Si) vibrations on a fully H(D)-covered silicon surface, modelled with a phonon bath of more than two thousand oscillators. These approximate methods allow reliable estimation of adsorbate vibrational lifetimes and provide some insight into vibration-phonon couplings at solid surfaces. Although this topic is mainly computational, the developed wavefunction approaches make it possible to describe quantum entanglement between the system and bath states, and to capture some coherent effects in the time evolution of the (sub-)system, which cannot be accounted for with the widely used 'reduced density matrix formalism'.
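A minimal numerical sketch of the Bixon-Jortner picture mentioned above (with hypothetical parameters, not those of the H-Si system): one 'bright' state is coupled with constant strength V to a dense, equally spaced manifold of bath states, and its survival probability decays at roughly the golden-rule rate 2*pi*V**2/spacing until the recurrence time 2*pi/spacing.

```python
# Bixon-Jortner toy model (hbar = 1): bright state at energy 0 coupled
# with constant strength V to an equally spaced ladder of bath states.
# The survival probability |<s|exp(-iHt)|s>|**2 decays approximately
# exponentially at the golden-rule rate 2*pi*V**2/spacing.
import numpy as np

def survival(n_bath=1200, spacing=0.01, V=0.02, t=10.0):
    """Survival probability of the bright state after time t."""
    e_bath = spacing * (np.arange(n_bath) - n_bath / 2)    # dense bath ladder
    H = np.zeros((n_bath + 1, n_bath + 1))
    H[0, 1:] = H[1:, 0] = V                                # constant coupling
    H[1:, 1:][np.diag_indices(n_bath)] = e_bath
    w, U = np.linalg.eigh(H)                               # exact diagonalization
    c0 = U[0, :]                                           # overlaps <s|k>
    amp = np.sum(c0**2 * np.exp(-1j * w * t))              # <s|exp(-iHt)|s>
    return abs(amp) ** 2
```

With the defaults above, the golden-rule rate is about 2*pi*(0.02)**2/0.01, roughly 0.25, so the survival probability follows approximately exp(-0.25 t) well before the recurrence time.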
The direct conversion of sunlight into usable forms of energy is one of the central cornerstones of the transition of our way of living from the use of fossil, non-renewable energy resources towards a more sustainable economy. Besides the necessary societal changes, it is the understanding of the solids employed that is of particular importance for the success of this target. In this work, the principles and approaches of systematic crystallographic characterisation and systematisation of solids are employed to allow a directed tuning of material properties. A thorough understanding of the solid state thereby forms the basis on which more applied approaches are founded.
Two material systems considered promising solar absorber materials are at the core of this work: halide perovskites and II-IV-N2 nitride materials. While the former are renowned for their high efficiencies and rapid development in recent years, the latter place an emphasis on true sustainability in that toxic and scarce elements are avoided.
Continental rift systems open up unique possibilities to study the geodynamic system of our planet: geodynamic localization processes are imprinted in the morphology of the rift, governing the time-dependent activity of faults and the topographic evolution of the rift, and controlling whether a rift is symmetric or asymmetric. Since lithospheric necking localizes strain towards the rift centre, deformation structures of previous rift phases are often well preserved, and passive margins, the end product of continental rifting, retain key information about the tectonic history from rift inception to continental rupture.
Current understanding of continental rift evolution is based on combining observations from active rifts with data collected at rifted margins. Connecting these isolated data sets is often accomplished in a conceptual way and leaves room for subjective interpretation. Geodynamic forward models, however, have the potential to link individual data sets in a quantitative manner, using additional constraints from rock mechanics and rheology, which makes it possible to transcend previous conceptual models of rift evolution. By quantifying geodynamic processes within continental rifts, numerical modelling provides key insights into tectonic processes that also operate in other plate-boundary settings, such as mid-ocean ridges, collisional mountain chains, or subduction zones.
In this thesis, I combine numerical, plate-tectonic, analytical, and analogue modelling approaches, with numerical thermomechanical modelling as the primary tool. This method has advanced rapidly during the last two decades owing to dedicated software development and the availability of massively parallel computer facilities. Nevertheless, only recently has the geodynamic modelling community been able to capture 3D lithospheric-scale rift dynamics from the onset of extension to final continental rupture.
The first chapter of this thesis provides a broad introduction to continental rifting, a summary of the applied rift modelling methods, and a short overview of previous studies. The following chapters, which constitute the main part of this thesis, feature studies on plate-boundary dynamics in two and three dimensions, followed by global-scale analyses (Fig. 1).
Chapter II focuses on 2D geodynamic modelling of rifted-margin formation. It highlights the formation of wide areas of hyperextended crustal slivers via rift migration as a key process that affected many rifted margins worldwide. This chapter also contains a study of rift velocity evolution, showing that rift strength loss and extension velocity are linked through a dynamic feedback. This process results in abrupt accelerations of the involved plates during rifting, illustrating for the first time that rift dynamics plays a role in changing global-scale plate motions. Since rift velocity affects key processes like faulting, melting, and lower crustal flow, this study also implies that the slow-fast velocity evolution should be imprinted in rifted-margin structures.
Chapter III relies on 3D Cartesian rift models in order to investigate various aspects of rift obliquity. Oblique rifting occurs if the extension direction is not orthogonal to the rift trend. Using 3D lithospheric-scale models from rift initialisation to breakup, I could isolate a characteristic evolution of dominant fault orientations. Further work in Chapter III addresses the impact of rift obliquity on the strength of the rift system. We illustrate that oblique rifting is mechanically preferred over orthogonal rifting, because brittle yielding requires a lower tectonic force. This mechanism elucidates rift competition during South Atlantic rifting, where the more oblique Equatorial Atlantic Rift proceeded to breakup while the simultaneously active but less oblique West African Rift System became a failed rift. Finally, this chapter also investigates the impact of a previous rift phase on current tectonic activity in the linkage area between the Kenyan and Ethiopian rifts. We show that the along-strike changes in rift style are not caused by changes in crustal rheology. Instead, the rift-linkage pattern in this area can be explained when accounting for the thinned crust and lithosphere of a Mesozoic rift event.
Chapter IV investigates rifting from the global perspective. A first study extends the oblique-rifting topic of the previous chapter to the global scale by investigating the frequency of oblique rifting during the last 230 million years. We find that approximately 70% of all ocean-forming rift segments involved an oblique component of extension, with obliquities exceeding 20°. This highlights the relevance of 3D approaches in the modelling, surveying, and interpretation of many rifted margins. In a final study, we propose a link between continental rift activity, diffuse CO2 degassing, and Mesozoic/Cenozoic climate changes. We used recent CO2 flux measurements in continental rifts to estimate worldwide rift-related CO2 release, based on the global extent of rifts through time. The first-order correlation with paleo-atmospheric CO2 proxy data suggests that rifts constitute a major element of the global carbon cycle.
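The upscaling logic of the final study reduces, at first order, to multiplying a measured CO2 flux per unit rift length by the reconstructed global length of active rifts at a given time; the numbers below are hypothetical placeholders, not values from the study.

```python
# Back-of-the-envelope sketch of the rift CO2 upscaling arithmetic.
# flux_mt_per_km_per_yr: measured degassing flux per km of active rift
# rift_length_km: reconstructed global length of active rift at some time
# Both inputs here are placeholder magnitudes for illustration only.

def rift_co2_release(flux_mt_per_km_per_yr, rift_length_km):
    """Global rift-related CO2 release in Mt per year."""
    return flux_mt_per_km_per_yr * rift_length_km

# e.g. a placeholder flux of 0.001 Mt/km/yr along 20,000 km of active rift
total = rift_co2_release(0.001, 20_000)   # ~20 Mt CO2 per year
```

Repeating this for each reconstructed time slice yields the rift-related CO2 release curve that is then compared with paleo-atmospheric CO2 proxies.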
Und der Zukunft abgewandt
(2010)
Since the end of the GDR, which ushered in the collapse of the Eastern Bloc and thus the end of the Cold War, there have been increasing efforts to define the nature of this state and thereby to understand and classify its consequences at the economic, social, psychological, and educational-policy levels. In this volume, Alexandra Budke analyses the school subject of geography, which, alongside civics and history, was a central subject and in which the "civic, worldview, or ideological education" defined in the curricula was to take place on the basis of Marxism-Leninism. She clarifies to what extent geography teaching in the GDR was used to communicate and disseminate the geopolitical interests of the state. The detailed analysis of subject teaching thus also makes it possible to answer the question of whether pupils were politically manipulated in class, and what scope for action the central actors of the classroom, teachers and pupils, perceived within the curricular requirements set by educational policy.
Controlling interactions in synthetic polymers as precisely as in proteins would have a strong impact on polymer science: advanced structural and functional control can lead to the rational design of integrated nano- and microstructures. To achieve this, the properties of monomer-sequence-defined oligopeptides were exploited. By incorporating them as monodisperse segments into synthetic polymers, we have learned over the past four years how to program the structure formation of polymers, to adjust and exploit interactions in such polymers, to control inorganic-organic interfaces in fiber composites, and to induce structure in biomacromolecules such as DNA for biomedical applications.
Die Plastizität der Gefühle
(2021)
Emotional life is increasingly being read out, regulated, and produced by digital technologies. This development, accompanied in equal measure by hopes and fears, is for now the latest station in a deep entanglement of affect and (cultural) technology that reaches back into early history. Bernd Bösel opens up a comprehensive genealogical view of the epoch-making readjustments of this technicisation. For only by retracing the various logics of commanding over affects does it become possible to understand the interweaving of the forms of technicisation on which the psycho-power of the present is based.
Klinische Analyse der physiologischen und pathologischen Sehnenadaptation an sportliche Belastung
(2021)
Devotio malefica
(2021)
Ancient curse rituals aimed to enforce the cursers' particular notion of justice, especially when neither the public justice system nor socially accepted codes of conduct could meet that claim. The rituals employed so-called defixionis tabellae (curse tablets), here termed devotiones maleficae. They consist mostly of inscribed lead lamellae and were produced to harm one or more victims.
Sara Chiarini examines the curse language used in these rituals, whose formulaic structures and components point to a tradition of the curse ritual. Individual additions, by contrast, offer clues to the circumstances surrounding the ritual's creation, the emotional state of the cursers, and the kinds of punishments that correspond to the ritual's legal dimension. Chiarini supplements the previous state of research with the newly discovered and published curse tablets and engages comprehensively with this epigraphic material.
Eco-physiological processes express the interaction of organisms with the environmental context of their habitat, their degree of adaptation, their level of resistance, and the limits of life in a changing environment. The present study focuses on observations obtained with the methods of this scientific discipline, "ecophysiology", and places them in a broader scientific context of universal character. The present eco-physiological work builds the basis for classifying and exploring the degree of habitability of another planet such as Mars through a biology-driven experimental approach. It also offers new ways of identifying key molecules that play a specific role in the physiological processes of the tested organisms and can thus serve as potential biosignatures in future space exploration missions aimed at searching for life. This has important implications for the newly emerging scientific field of astrobiology. Astrobiology addresses the study of the origin, evolution, distribution, and future of life in the universe. The three fundamental questions hidden behind this definition are: How does life begin and evolve? Is there life beyond Earth and, if so, how can we detect it? What is the future of life on Earth and in the universe? This multidisciplinary field thus encompasses the search for habitable environments in our Solar System and for habitable planets outside it. It comprises the search for evidence of prebiotic chemistry and life on Mars and other bodies in our Solar System, such as the icy moons of the Jovian and Saturnian systems; laboratory and field research into the origins and early evolution of life on Earth; and studies of the potential of life to adapt to challenges on Earth and in space.
For this purpose, an integrated research strategy was applied that connects field research and laboratory research, allowing planetary simulation experiments, with investigations performed in space (particularly in low Earth orbit).
Potassium ions (K+) are the most abundant inorganic cations in plants; measured by dry weight, they can account for up to 10%. Potassium ions perform important functions in various processes in the plant, being essential, for example, for growth and metabolism. Many important enzymes work optimally at a K+ concentration in the range of 100 mM. For this reason, plant cells maintain a controlled potassium concentration of about 100 mM in those compartments that are involved in metabolism. The uptake of potassium ions from the soil and their transport within the plant and within a plant cell are made possible by various potassium transport proteins. Maintaining a stable K+ concentration is only possible, however, if the activity of these transport proteins is under strict control. The processes that regulate the transport proteins are to date only partially understood, yet more detailed knowledge in this area is of central importance for understanding how the transport proteins are integrated into the complex system of the plant organism. This habilitation thesis summarises my own publications describing investigations of various regulatory mechanisms of plant potassium channels. These investigations span a spectrum of protein-biochemical, biophysical, and plant-physiological analyses. To understand the regulatory mechanisms fundamentally, their structural and molecular characteristics are examined on the one hand, and the biophysical and kinetic relationships of the regulatory mechanisms are analysed on the other. The insights gained allow a new, more detailed interpretation of the physiological role of potassium transport proteins in the plant.
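The ~100 mM cytosolic K+ level mentioned in the abstract can be related to membrane voltages via the Nernst equation, the standard textbook relation for ion equilibrium across a membrane. This is a generic illustration with assumed example concentrations, not a calculation taken from the thesis:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_mV(c_out_mM, c_in_mM, temp_K=298.15, z=1):
    """Equilibrium (Nernst) potential in mV for an ion of charge z:
    E = (R*T / z*F) * ln(c_out / c_in)."""
    return 1000.0 * (R * temp_K) / (z * F) * math.log(c_out_mM / c_in_mM)

# Assumed example values: ~100 mM K+ inside the cell, 10 mM outside.
E_K = nernst_mV(10.0, 100.0)
print(f"E_K = {E_K:.1f} mV")  # about -59 mV at 25 degrees C
```

A tenfold concentration gradient of a monovalent cation thus corresponds to roughly 59 mV at room temperature, which is the kind of electrochemical driving force that the channels and transporters discussed in the thesis must work with or against.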
Biological materials, in addition to having remarkable physical properties, can also change shape and volume. These shape and volume changes allow organisms to form new tissue during growth and morphogenesis, as well as to repair and remodel old tissues. In addition, shape or volume changes in an existing tissue can lead to useful motion or force generation (actuation) that may even still function in the dead organism, as in the well-known example of the hygroscopic opening and closing behaviour of the pine cone. Both growth and actuation of tissues are mediated, in addition to biochemical factors, by the physical constraints of the surrounding environment and the architecture of the underlying tissue. This habilitation thesis describes biophysical studies carried out over the past years on growth- and swelling-mediated shape changes in biological systems. These studies use a combination of theoretical and experimental tools to elucidate the physical mechanisms governing geometry-controlled tissue growth and geometry-constrained tissue swelling. It is hoped that, in addition to helping us understand fundamental processes of growth and morphogenesis, ideas stemming from such studies can also be used to design new materials for medicine and robotics.
Biological materials have always been used by humans because of their remarkable properties. This is surprising, since these materials are formed under physiological conditions and from commonplace constituents. Nature thus not only provides us with inspiration for designing new materials but also teaches us how to use soft molecules to tune interparticle and external forces in order to structure and assemble simple building blocks into functional entities. Magnetotactic bacteria and their chains of magnetosomes are a striking example of such an accomplishment, in which a very simple living organism controls the properties of inorganics via organics at the nanometre scale to form a single magnetic dipole that orients the cell along the Earth's magnetic field lines. My group has developed biological and bio-inspired research based on these bacteria. My research, at the interface between chemistry, materials science, physics, and biology, focuses on how biological systems synthesize, organize, and use minerals. We apply the design principles to sustainably form hierarchical materials with controlled properties that can be used, e.g., as magnetically directed nanodevices for applications in sensing, actuation, and transport. In this thesis, I first present how magnetotactic bacteria intracellularly form magnetosomes and assemble them into chains. I developed an assay in which cells can be switched between magnetic and non-magnetic states. This made it possible to study the dynamics of magnetosome and magnetosome-chain formation. We found that the magnetosomes nucleate within minutes, whereas chains assemble within hours. Magnetosome formation necessitates iron uptake as ferrous or ferric ions. The transport of the ions within the cell leads to the formation of a ferritin-like intermediate, which is subsequently transported into the magnetosome organelle and transformed into a ferrihydrite-like precursor. Finally, magnetite crystals nucleate and grow toward their mature dimensions.
In addition, I show that the magnetosome assembly displays hierarchically ordered nano- and microstructures over several levels, enabling the coordinated alignment and motility of entire populations of cells. The magnetosomes are indeed composed of structurally pure magnetite. The organelles are partly composed of proteins whose role is crucial for the properties of the magnetosomes. As an example, we showed how the protein MmsF is involved in the control of magnetosome size and morphology. We have further shown by 2D X-ray diffraction that the magnetosome particles are aligned along the same direction in the magnetosome chain. We then show how the magnetic properties of the nascent magnetosomes influence the alignment of the particles, and how the proteins MamJ and MamK coordinate this assembly. We propose a theoretical approach suggesting that biological forces are more important than physical ones for chain formation. All these studies thus show how magnetosome formation and organization are under strict biological control, which is associated with unprecedented material properties. Finally, we show that the magnetosome chain enables the cells to find their preferred oxygen conditions when a magnetic field is present. The synthetic part of this work shows how understanding the design principles of magnetosome formation enabled me to perform biomimetic syntheses of magnetite particles within the highly desired size range of 25 to 100 nm. Nucleation and growth of such particles are based on the aggregation of iron colloids termed primary particles, as imaged by cryo-high-resolution TEM. I show how additives influence magnetite formation and properties. In particular, MamP, a so-called magnetochrome protein involved in magnetosome formation in vivo, enables the in vitro formation of magnetite nanoparticles exclusively from ferrous iron by controlling the redox state of the process.
Negatively charged additives such as MamJ retard magnetite nucleation in vitro, probably by interacting with the iron ions. Other additives, such as polyarginine, can be used to control the colloidal stability of stable single-domain-sized nanoparticles. Finally, I show how we can "glue" magnetic nanoparticles together to form propellers that can be actuated and made to swim with the help of external magnetic fields. We propose a simple theory to explain the observed movement. We can use this theoretical framework to design experimental conditions for sorting the propellers by size, and we confirm this prediction experimentally. Thereby, we could image propellers with sizes down to 290 nm in their longer dimension, much smaller than achieved so far.
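A back-of-the-envelope calculation shows why a chain of magnetosomes, rather than a single crystal, can orient a cell: stable alignment in the Earth's field requires the magnetic orientation energy mB to exceed the thermal energy kT. The moment and field values below are typical order-of-magnitude assumptions from the magnetotaxis literature, not measurements from this thesis:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def alignment_ratio(moment_Am2, field_T, temp_K=300.0):
    """Ratio of magnetic orientation energy m*B to thermal energy k*T."""
    return moment_Am2 * field_T / (K_B * temp_K)

# Assumed order-of-magnitude values:
#   single ~50 nm magnetite magnetosome: m ~ 5e-17 A m^2
#   chain of ~20 magnetosomes:           m ~ 1e-15 A m^2
#   Earth's magnetic field:              B ~ 50 microtesla
single = alignment_ratio(5e-17, 50e-6)
chain = alignment_ratio(1e-15, 50e-6)
print(f"single magnetosome: mB/kT ~ {single:.1f}")  # below 1: thermal noise wins
print(f"full chain:         mB/kT ~ {chain:.1f}")   # ~10: chain aligns with field
```

Under these assumptions a single magnetosome barely overcomes thermal agitation, while the summed dipole of the chain does so comfortably, which is the physical rationale for the tightly controlled chain assembly described above.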
Line driven winds are accelerated by the momentum transfer from photons to a plasma, by absorption and scattering in numerous spectral lines. Line driving is most efficient for ultraviolet radiation, and at plasma temperatures from 10^4 K to 10^5 K. Astronomical objects which show line driven winds include stars of spectral type O, B, and A, Wolf-Rayet stars, and accretion disks over a wide range of scales, from disks in young stellar objects and cataclysmic variables to quasar disks. It is not yet possible to solve the full wind problem numerically, and treat the combined hydrodynamics, radiative transfer, and statistical equilibrium of these flows. The emphasis in the present writing is on wind hydrodynamics, with severe simplifications in the other two areas. I consider three topics in some detail, for reasons of personal involvement. 1. Wind instability, as caused by Doppler de-shadowing of gas parcels. The instability causes the wind gas to be compressed into dense shells enclosed by strong shocks. Fast clouds occur in the space between shells, and collide with the latter. This leads to X-ray flashes which may explain the observed X-ray emission from hot stars. 2. Wind runaway, as caused by a new type of radiative waves. The runaway may explain why observed line driven winds adopt fast, critical solutions instead of shallow (or breeze) solutions. Under certain conditions the wind settles on overloaded solutions, which show a broad deceleration region and kinks in their velocity law. 3. Magnetized winds, as launched from accretion disks around stars or in active galactic nuclei. Line driving is assisted by centrifugal forces along co-rotating poloidal magnetic field lines, and by Lorentz forces due to toroidal field gradients. A vortex sheet starting at the inner disk rim can lead to highly enhanced mass loss rates.
Highly collimated, high-velocity streams of hot plasma – jets – are observed as a general phenomenon in a variety of astrophysical objects of widely different size and energy output. Known jet sources are protostellar objects (T Tauri stars, embedded IR sources), galactic high-energy sources ("microquasars"), and active galactic nuclei (extragalactic radio sources and quasars). Within the last two decades, our knowledge of the processes involved in astrophysical jet formation has condensed into a kind of standard model: the scenario of a magnetohydrodynamically accelerated and collimated jet stream launched from the innermost part of an accretion disk close to the central object. Traditionally, the problem of jet formation is divided into two categories. One is the question of how to collimate and accelerate an uncollimated low-velocity disk wind into a jet. The second is the question of how to initiate that outflow from a disk, i.e. how to turn accretion of matter into ejection as a disk wind. My own work is mainly related to the first question, the collimation and acceleration process. Due to the complexity of both the physical processes believed to be responsible for jet launching and the spatial configuration of the physical components of the jet source, the enigma of jet formation is not yet completely understood. On the theoretical side, there has been substantial advancement during the last decade from purely stationary models to time-dependent simulations, driven by the vast increase in computer power. Observers, on the other hand, do not yet have the instruments at hand to spatially resolve the very origin of the jet. It can be expected that the next years will also yield substantial improvement along both tracks of astrophysical research.
Three-dimensional magnetohydrodynamic simulations will improve our understanding of the jet-disk interrelation and the time-dependent character of jet formation, the generation of the magnetic field in the jet source, and the interaction of the jet with the ambient medium. Another step will be the combination of radiative transfer computations and magnetohydrodynamic simulations, providing a direct link to the observations. At the same time, a new generation of telescopes (VLT, NGST) in combination with new instrumental techniques (IR interferometry) will lead to a "quantum leap" in jet observation, as the resolution will then be sufficient to zoom into the innermost region of jet formation.
Because of an insufficient state of knowledge on decisive aspects, the management of lipoedema therapy poses a particular challenge. Since the pathogenesis of the disease has not been sufficiently clarified and no pathognomonic diagnostic criterion has yet been defined, many affected patients report years of suffering before therapeutic measures are initiated. Thanks to increased awareness of the disease in recent years, the intervals until correct diagnosis have fortunately been shortened considerably. Although assigning their complaints to a clearly defined disease is a relief for many patients, the realisation that treatment options are limited often constitutes a renewed burden.
As a consequence of the unexplained pathogenesis, no causal therapy for lipoedema has been defined to date. Initially, conservative treatment strategies were only partially integrated into a generally accepted concept, and their limitations in particular were not clearly defined. Although sufficient evidence is still lacking in various areas of therapy, a systematic review has made it possible to place the fundamental treatment options in relation to one another. Affected patients, as well as the various medical disciplines involved in treatment, thus have a basic treatment algorithm whose recommendations go beyond the simple prescription of lymphatic drainage and compression garments. Through critical reflection on the prevailing dogmas, an interdisciplinary guideline was proposed that, in the form of a step-by-step scheme, comprehensibly integrates all essential pillars of therapy into a generally applicable treatment plan.
In the multilayered management of the disease, however, surgical treatment, liposuction, often remains the "ultima ratio" after conservative measures have failed to bring relief. The main objective of the present work therefore focuses on optimising the surgical approach to liposuction in patients with lipoedema, and it highlights both the limits of the indication and the potential for long-term treatment success. Long-term results show that liposuction can be regarded as a safe procedure with the potential for sustained symptom reduction in lipoedema patients. The necessity of interlinking surgical measures with conservative therapies, and thus of integrating liposuction as a sensible treatment alternative into a clearly defined therapy concept, should also be emphasised.
Methodologically, the work draws on a total of 10 publications. The multi-stage mega-liposuction for the treatment of lipoedema postulated here, with cumulative total aspiration volumes of up to 66,000 ml across all procedures, could be confirmed and validated as an evidence-based treatment method. The low complication rates described are, among other things, the result of a differentiated, individualised perioperative strategy. Beyond basic methodological principles, however, there are manifold variations whose implications for complication rates must each be considered separately. Although no consensus exists on a universally valid standard liposuction procedure, numerous elements of perioperative management could be defined that have a potentially positive influence on the outcome regardless of the surgical technique used. In summary, although liposuction for lipoedema can now be considered a safe procedure, some aspects remain unresolved, above all volume management and the standardised definition of the maximum aspiration volume.
The analysis of various covariates for the relief of lipoedema-associated symptoms after liposuction shows that age, body mass index (BMI), and preoperative stage of the disease have a significant influence on the postoperative result and must be taken into account when planning the multi-stage surgical approach. BMI- or body-weight-dependent target aspiration volumes, by contrast, were not relevant as prognostic factors for the postoperative outcome. Whether this might be because regularly performed mega-liposuctions already exceed the volume threshold "necessary" for adequate symptom relief, or whether this parameter indeed has no influence on the postoperative result, could not be conclusively clarified.
Furthermore, a positive benefit for the comorbidities associated with lipoedema could be demonstrated. By regularly integrating liposuction into the therapy scheme, the spectrum of treatment methods can thus be usefully expanded by a sustainable alternative. In contrast to conservative therapy alone, this represents a substantial step away from purely symptomatic treatment. In addition, the diverse symptoms of the various associated comorbidities must be taken into account. As a consequence, and in view of the need for a holistic, interdisciplinary therapeutic approach, the term "lipoedema syndrome" might be more apt and is put forward for discussion.
For a particular patient population, basic principles of the perioperative approach were also examined in detail. Lipoedema patients with concomitant von Willebrand syndrome pose an extraordinary challenge with regard to bleeding complications. The available evidence-based recommendations for managing these patients in procedures of similar risk classification were systematically reviewed and related to the special requirements of mega-liposuction. The therapy scheme developed in this process will in future considerably improve the preoperative detection of coagulopathies in general, and the perioperative complication rate in von Willebrand patients in particular.
In summary, a generally applicable algorithm for the modern and lastingly successful treatment of lipoedema patients, with a particular focus on mega-liposuction, could thus be developed. With adequate perioperative management and attention to the large volume shifts involved, the procedure can be performed safely and with few complications. The pathophysiology of the disease remains to be conclusively clarified, with an immunological origin as well as a primary pathology of the lymphatic vessel system or of the fat (precursor) cells favoured as explanatory models. The development of diagnostic biomarkers should be pursued.
The potential of nanosized materials has been amply demonstrated, but a closer look shows that a significant percentage of this research relates to oxides and metals, while the number of studies drops drastically for metallic ceramics, namely transition metal nitrides and carbides. The scarcity of related publications does not reflect their potential but rather the difficulties of synthesising them as dense and defect-free structures, fundamental prerequisites for advanced mechanical applications.
The present habilitation work aims to close the gap between preparation and processing, indicating novel synthetic pathways for a simpler and more sustainable synthesis of transition metal nitride (MN) and carbide (MC) based nanostructures and for easier processing thereafter. Despite their simplicity and reliability, the designed synthetic processes allow the production of functional materials with the demanded size and morphology.
The goal was achieved by exploiting classical and less classical precursors, ranging from common metal salts and molecules (e.g. urea, gelatin, agar) to more exotic materials such as leaves, filter paper, and even wood. It was found that the choice of precursors and reaction conditions makes it possible to control chemical composition (going, for instance, from metal oxides to metal oxynitrides to metal nitrides, or from metal nitrides to metal carbides, up to quaternary systems), size (from 5 to 50 nm), and morphology (from mere spherical nanoparticles to rod-like shapes, fibers, layers, mesoporous and hierarchical structures, etc.). The nature of the mixed precursors also allows the preparation of metal nitride/carbide based nanocomposites, leading to multifunctional materials (e.g. MN/MC@C, MN/MC@PILs) and also allowing dispersion in liquid media. Control over composition, size, and morphology is obtained by simple adjustment of the main route, but also by coupling it with processes such as electrospinning, aerosol spraying, and bio-templating. Last but not least, the nature of the precursor materials also allows easy processing, including printing, coating, casting, and the preparation of films and thin layers.
The designed routes are conceptually similar: they all start by building up a secondary metal-ion-N/C precursor network, which converts, upon heat treatment, into an intermediate "glass". This glass stabilizes the nascent nanoparticles during their nucleation and impedes their uncontrolled growth during the heat treatment (Scheme 1). In this way, one of the main problems in the synthesis of MN/MC, namely the need for very high temperatures, could also be overcome (from up to 2000°C for classical syntheses down to 700°C in the present cases). The designed synthetic pathways are also conceived to allow the use of non-toxic compounds and to minimize (or even avoid) post-synthesis purification, while still yielding phase-pure and well-defined (crystalline) nanoparticles.
This research helps to simplify the preparation of MN/MC, making these systems readily available in suitable amounts for both fundamental and applied science. The prepared systems have been tested (in some cases for the first time) in many different fields: in batteries (MnN0.43@C showed a capacity stabilized at 230 mAh/g, with coulombic efficiencies close to 100%), as alternative magnetic materials (Fe3C nanoparticles were prepared with different sizes and therefore different magnetic behavior, superparamagnetic or ferromagnetic, showing a saturation magnetization of up to 130 emu/g, i.e. similar to the value expected for the bulk material), as filters and for the degradation of organic dyes (outmatching the performance of carbon), and as catalysts (both as active phase and as active support, leading to high turnover rates and, more interestingly, to tunable selectivity). Furthermore, with this route it was possible to prepare, for the first time to the best of our knowledge, well-defined and crystalline MnN0.43, Fe3C, and Zn1.7GeN1.8O nanoparticles via bottom-up approaches.
Once the synthesis of these materials is made straightforward, any further modification, combination, or manipulation is in principle possible, and new systems can be purposely conceived (e.g. hybrids, nanocomposites, ferrofluids, etc.).
Phonology limited
(2007)
Phonology Limited is a study of the areas of phonology where the application of optimality theory (OT) has previously been problematic. Evidence from a wide variety of phenomena in a wide variety of languages is presented to show that interactions involving more than just faithfulness and markedness are best analyzed as involving language-specific morphological constraints rather than universal phonological constraints. OT has proved to be a highly insightful and successful theory of linguistics in general and phonology in particular, focusing as it does on surface forms and treating the relationship between inputs and outputs as a form of conflict resolution. Yet there have also been a number of serious problems with the approach that have led some detractors to argue that OT has failed as a theory of generative grammar. The most serious of these problems is opacity, defined as a state of affairs where the grammatical output of a given input appears to violate more constraints than an ungrammatical competitor. It is argued that these problems disappear once language-specific morphological constraints are allowed to play a significant role in analysis. Specifically, a number of processes of Tiberian Hebrew traditionally considered opaque are reexamined and shown to be straightforwardly transparent, but crucially involving morphological constraints on form, such as a constraint requiring certain morphological forms to end with a syllabic trochee, or a constraint requiring paradigm uniformity with regard to the occurrence of fricative allophones of stop phonemes. Language-specific morphological constraints are also shown to play a role in allomorphy, where a lexeme is associated with more than one input; the constraint hierarchy then decides which input is grammatical in which context. For example, [ɨ]/[ə] and [u]/[ə] alternation found in some lexemes but not in others in Welsh is attributed to the presence of two inputs for the lexemes with the alternation. 
A novel analysis of the initial consonant mutations of the modern Celtic languages argues that mutated forms are separately listed inputs chosen in appropriate contexts by constraints on morphology and syntax, rather than being outputs that are phonologically unfaithful to their unmutated inputs. Finally, static irregularities and lexical exceptions are examined and shown to be attributable to language-specific morphological constraints. In American English, the distribution of tense and lax vowels is predictable in several contexts; however, in some contexts, the distributions of tense [ɔ] vs. lax [a] and of tense [æ] vs. lax [æ] are not as expected. It is shown that clusters of output-output faithfulness constraints create a pattern to which words are attracted, which however violates general phonological considerations. New words that enter the language first obey the general phonological considerations before being attracted into the language-specific exceptional pattern.
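The core mechanism the abstract describes, ranked violable constraints selecting among candidate outputs, amounts to lexicographic comparison of violation profiles. The following toy sketch uses invented candidate forms and constraint names purely for illustration; it is not an analysis from the thesis, but it shows how a high-ranked language-specific morphological constraint (here hypothetically named "MorphTrochee") can outrank faithfulness:

```python
# Toy Optimality Theory evaluation: the winning candidate is the one whose
# violation profile is lexicographically smallest under the constraint ranking.
# Candidate forms and constraint names below are invented for illustration.

def evaluate(candidates, ranking):
    """candidates: {form: {constraint: violation_count}};
    ranking: constraints ordered from highest- to lowest-ranked."""
    def profile(form):
        # Tuple comparison in Python is lexicographic, mirroring OT evaluation.
        return tuple(candidates[form].get(c, 0) for c in ranking)
    return min(candidates, key=profile)

# Hypothetical ranking: a morphological template constraint dominates
# the faithfulness constraints Max (no deletion) and Dep (no epenthesis).
ranking = ["MorphTrochee", "Max", "Dep"]
candidates = {
    "katab":  {"MorphTrochee": 1},             # faithful, violates the template
    "kataab": {"Dep": 1},                      # epenthesis satisfies the template
    "ktab":   {"MorphTrochee": 1, "Max": 1},   # deletion, still off-template
}
print(evaluate(candidates, ranking))  # -> kataab
```

Because profiles are compared position by position, a single violation of the top-ranked constraint can never be rescued by perfect performance on lower-ranked ones, which is exactly the "conflict resolution" the abstract attributes to OT.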
The role played by azobenzene polymers in modern photonic, electronic and opto-mechanical applications can hardly be overstated. These polymers are successfully used to produce alignment layers for liquid crystalline fluorescent polymers in display and semiconductor technology, to build waveguides and waveguide couplers, as data storage media, and as security labels for product protection. A particularly active topic in current research is light-driven artificial muscles based on azobenzene elastomers. The incorporation of azobenzene chromophores into polymer systems via covalent bonding, or even by blending, gives rise to a number of unusual effects under visible (VIS) and ultraviolet light irradiation. The most striking effect is the inscription of surface relief gratings (SRGs) onto thin azobenzene polymer films. At least seven models have been proposed to explain the origin of the inscribing force, but none of them satisfactorily describes the light-induced material transport on the molecular level. In most models, explaining mass transport over micrometer distances during irradiation at room temperature requires assuming a considerable degree of photoinduced softening, at least comparable with that at the glass transition. Contrary to this assumption, we have gathered convincing evidence that there is no considerable softening of the azobenzene layers under illumination. We can now say with confidence that light-induced softening is a very weak accompanying effect rather than a necessary condition for the formation of SRGs. This means that the inscribing force must drive the material beyond the yield point of the azobenzene polymer. Hence, an appropriate approach to describe the formation and relaxation of SRGs is a viscoplastic theory. It was used to reproduce the pulse-like inscription of SRGs as measured by VIS light scattering.
At longer inscription times the VIS scattering pattern exhibits some peculiarities, which can be explained by the appearance of a density grating; this grating will be shown to arise from the finite compressibility of the polymer film. As a logical consequence of the aforementioned research, a thermodynamic theory explaining the light-induced deformation of free-standing films and the formation of SRGs is proposed. The basic idea of this theory is that under homogeneous illumination an initially isotropic sample should stretch itself along the polarization direction to compensate for the entropy decrease produced by the photoinduced reorientation of azobenzene chromophores. Finally, some ideas for the further development of this controversial topic are discussed.
It has long been known that, upon contact of a biomaterial with its biological environment during implantation or extracorporeal interaction, proteins from the surrounding milieu are adsorbed first, with the surface properties of the material determining the composition of the protein layer and the conformation of the proteins it contains. The subsequent interaction of cells with the material is therefore generally mediated by this adsorbate layer. The influence of surfaces on the composition and conformation of the proteins and on the subsequent interaction with cells is of particular interest, since it both permits conclusions about applicability and yields insights that can be exploited for the development of new materials with improved biocompatibility. In this habilitation thesis, the influence of the composition of polymers and of their surface properties on the adsorption of proteins, on the activation state of plasmatic coagulation, and on the adhesion of cells was therefore investigated. Possibilities for influencing these processes by changing the bulk composition or by surface modification of biomaterials were also presented. Findings from this work could be used for the development of membranes for biohybrid organs.
The interventional treatment of atrial fibrillation damages adjacent tissues and organs more frequently than was perceived in the past. This work focuses on injuries to the esophagus, which are particularly relevant because of their poor predictability, their delayed onset, and the fatal prognosis once an atrio-esophageal fistula has developed.
Atrial fibrillation itself does not pose an immediate vital threat, but its complications (e.g. heart failure, stroke) nevertheless make it prognostically relevant. Antiarrhythmic drugs do not achieve improved rhythm control (freedom from arrhythmia); catheter-based interventional treatment is superior to drug therapy. Early and successful treatment of atrial fibrillation has been shown to improve clinical endpoints and prognosis. The risk of invasive treatment (in particular the occurrence of prognostically relevant complications) must, however, be considered when establishing the indication and performing the procedure, and weighed against the beneficial effects of the treatment.
Studies on the prevention of the very rare atrio-esophageal fistulas rely on surrogate parameters, so far exclusively ablation-induced mucosal lesions of the esophagus. The investigations in this work reveal a more complex picture of (peri-)esophageal injury after atrial fibrillation ablation with thermal energy sources.
(1) A new definition of esophageal injury: esophageal and periesophageal impairments occur very frequently (in two thirds of patients according to the extended definition used here) and are independent of the ablation energy used. The manifestations of esophageal injury differ between the energy protocols, although the underlying mechanism has not been elucidated. This work describes the different manifestations of thermal esophageal injury, their determinants, and their pathophysiological relevance.
(2) The detection of (sometimes subtle) esophageal injury depends crucially on the intensity of follow-up. Restricting follow-up to subjective reports (e.g. pain on swallowing, heartburn) is misleading: the majority of changes remain asymptomatic, and by the time symptoms of a fully developed atrio-esophageal fistula appear (usually after several weeks), the prognosis is already very poor. In most electrophysiology centers, endoscopy of the esophagus is not performed at all, or only in the case of persistent symptoms, and can only detect mucosal lesions. The extent of esophageal and periesophageal injury is thereby greatly underestimated. Changes in the periesophageal space, whose clinical relevance is (as yet) unclear, are not captured, and wall edema and injury to the tissue between the left atrium and the esophagus (including nerves and vessels) are thus ignored.
The studies also contribute to a reappraisal of established measures and risk factors of esophageal injury.
(3) Temperature monitoring in the esophagus based on maximum deviations is informative only for extreme values and is therefore not helpful for avoiding esophageal lesions. Complex analysis of the raw temperature data (so far only possible offline) yields, in the AUC for RF ablations, a predictive parameter for esophageal injury that allows the subsequent endoscopic workup to be structured. No comparable value could be found for cryoablations in these analyses.
(4) Chronic inflammation of the lower third of the esophagus not only impedes the healing of a thermal esophageal lesion but can also promote the occurrence of such lesions during ablation. The large number of pre-existing esophageal changes indicating increased vulnerability, and their significance for the development of thermal lesions, may offer a starting point for preventive measures.
In addition, manifestations of esophageal injury that may be relevant on pathophysiological grounds are captured and described by means of extensive diagnostics.
(5) The systematic extension of diagnostic imaging to the periesophageal space by endosonography showed that mucosal lesions alone represent only a small part of esophageal injury. Mucosal lesions resulting from instrumental injury are not associated with the risk of developing an atrio-esophageal fistula, underscoring the pathophysiological relevance of the periesophageal changes.
(6) Functional diagnostics of thermal injury to the periesophageal vagal plexus identify patients with esophageal injury not captured by imaging, whose effects (food retention and gastro-esophageal reflux) may nevertheless contribute to lesion progression.
In this thesis, a collection of studies is presented that advances research on complex food webs in several directions. Food webs, the networks of predator-prey interactions in ecosystems, are responsible for distributing the resources every organism needs to stay alive. They are thus central to our understanding of the mechanisms that support biodiversity, which, in the face of increasingly severe anthropogenic global change and accelerated species loss, is of the highest importance, not least for our own well-being.
The studies in the first part of the thesis are concerned with general mechanisms that determine the structure and stability of food webs. It is shown how the allometric scaling of metabolic rates with the species' body masses supports their persistence in size-structured food webs (where predators are larger than their prey), and how this interacts with the adaptive adjustment of foraging efforts by consumer species to create stable food webs with a large number of coexisting species. The importance of the master trait body mass for structuring communities is further exemplified by demonstrating that the specific way the body masses of species engaging in empirically documented predator-prey interactions affect the predator's feeding rate dampens population oscillations, thereby helping both species to survive. In the first part of the thesis it is also shown that in order to understand certain phenomena of population dynamics, it may be necessary to not only take the interactions of a focal species with other species into account, but to also consider the internal structure of the population. This can refer for example to different abundances of age cohorts or developmental stages, or the way individuals of different age or stage interact with other species.
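The allometric scaling invoked above, with mass-specific metabolic rates falling off with body mass as a power law, is commonly parameterized with an exponent of about −1/4 in bioenergetic food-web models. As a minimal sketch (the exponent and normalization below are generic textbook values, not parameters taken from the thesis):

```python
# Illustrative sketch of allometric scaling of mass-specific metabolic
# rates, x ~ x0 * M**(-0.25), as commonly assumed in bioenergetic
# food-web models. All values are generic, not from the thesis.

def mass_specific_rate(body_mass, x0=1.0, exponent=-0.25):
    """Per-unit-biomass metabolic rate for a species of given body mass."""
    return x0 * body_mass ** exponent

# A size-structured chain: each consumer ~100x heavier than its resource.
masses = [1.0, 1e2, 1e4]  # basal, intermediate, top
for m in masses:
    print(f"body mass {m:>8.0f}  ->  specific rate {mass_specific_rate(m):.3f}")
```

Larger species thus respire less per unit biomass, which is the mechanism by which large predators can persist on comparatively slow energy inflow.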
Building on these general insights, the second part of the thesis is devoted to exploring the consequences of anthropogenic global change on the persistence of species. It is first shown that warming decreases diversity in size-structured food webs. This is due to starvation of large predators on higher trophic levels, which suffer from a mismatch between their respiration and ingestion rates when temperature increases. In host-parasitoid networks, which are not size-structured, warming does not have these negative effects, but eutrophication destabilises the systems by inducing detrimental population oscillations. In further studies, the effect of habitat change is addressed. On the level of individual patches, increasing isolation of habitat patches has a similar effect as warming, as it leads to decreasing diversity due to the extinction of predators on higher trophic levels. In this case it is caused by dispersal mortality of smaller and therefore less mobile species on lower trophic levels, meaning that an increasing fraction of their biomass production is lost to the inhospitable matrix surrounding the habitat patches as they become more isolated. It is further shown that increasing habitat isolation desynchronises population oscillations between the patches, which in itself helps species to persist by dampening fluctuations on the landscape level. However, this is counteracted by an increasing strength of local population oscillations fuelled by an indirect effect of dispersal mortality on the feeding interactions. Last, a study is presented that introduces a novel mechanism for supporting diversity in metacommunities. It builds on the self-organised formation of spatial biomass patterns in the landscape, which leads to the emergence of spatio-temporally varying selection pressures that keep local communities permanently out of equilibrium and force them to continuously adapt. 
Because this mechanism relies on the spatial extension of the metacommunity, it is also sensitive to habitat change.
In the third part of the thesis, the consequences of biodiversity for the functioning of ecosystems are explored. The studies focus on standing stock biomass, biomass production, and trophic transfer efficiency as ecosystem functions. It is first shown that increasing the diversity of animal communities increases the total rate of intra-guild predation. Nevertheless, the total biomass stock of the animal communities increases, which also increases their exploitative pressure on the underlying plant communities. Despite this, the plant communities can maintain their standing stock biomass owing to a shift of the body-size spectra of both animal and plant communities towards larger species with a lower specific respiration rate. In another study it is further demonstrated that the generally positive relationship between diversity and the above-mentioned ecosystem functions becomes steeper when not only the feeding interactions but also the numerous non-trophic interactions (such as predator interference or competition for space) between the species of an ecosystem are taken into account. Finally, two studies are presented that demonstrate the power of functional diversity as an explanatory variable. It is interpreted as the range spanned by the functional traits of the species that determine their interactions. This approach makes it possible to understand mechanistically how the ecosystem functioning of food webs with multiple trophic levels is affected by all parts of the food web, and why a high functional diversity is required for the efficient transport of energy from primary producers to the top predators.
The general discussion draws some synthesising conclusions, e.g. on the predictive power of ecosystem functioning to explain diversity, and provides an outlook on future research directions.
Ecce figura
(2023)
What we talk about when we talk about figures is a complex question that touches several disciplines. Erich Auerbach's figura/Mimesis project initiated the interdisciplinary study of this concept. Whether in the history of literature, of images, or of knowledge, the presence and topicality of figura in Romance and comparative studies attest to a continuing interest in theoretical work at the intersection of theology, philosophy, literary studies, and art history. What has been lacking so far, however, is a fundamental methodological reflection that gives equal weight to the interdisciplinary aspects and unites them in a joint effort on the concept.
Remedying this omission is the task of the present work. Starting from Erich Auerbach, Walter Benjamin, and Hannah Arendt, the monograph traces, in comparative constellations from antiquity to modernity, the literary, art-historical, theological, and philosophical traces of figura, which are developed into a method of literary-philosophical figuralogy.
Ecce figura understands itself as a compendium of interdisciplinary conceptual history between literature, philosophy, and theology, one that invites being read and extended in new constellations.
Earthquake faults interact with each other in many different ways, and hence earthquakes cannot be treated as individual, independent events. Although earthquake interactions generally lead to a complex evolution of the crustal stress field, this does not necessarily mean that earthquake occurrence becomes random and completely unpredictable. In particular, the interplay between earthquakes can explain the occurrence of pronounced characteristics such as periods of accelerated and depressed seismicity (seismic quiescence) as well as spatiotemporal earthquake clustering (swarms and aftershock sequences). Ignoring the time dependence of the process by looking at time-averaged values – as is largely done in standard procedures of seismic hazard assessment – can thus lead to erroneous estimations not only of the activity level of future earthquakes but also of their spatial distribution. There is therefore an urgent need for applicable time-dependent models. In my work, I aimed at a better understanding and characterization of earthquake interactions in order to improve seismic hazard estimations. For this purpose, I studied seismicity patterns on spatial scales ranging from hydraulic fracture experiments (meters to kilometers) to fault-system size (hundreds of kilometers), while the temporal scale of interest varied from the immediate aftershock activity (minutes to months) to seismic cycles (tens to thousands of years). My studies revealed a number of new characteristics of fluid-induced and stress-triggered earthquake clustering as well as precursory phenomena in earthquake cycles. Analyses of earthquake and deformation data were accompanied by statistical and physics-based model simulations, which allow a better understanding of the role of structural heterogeneities, stress changes, afterslip and fluid flow.
Finally, new strategies and methods have been developed and tested which help to improve seismic hazard estimations by taking the time-dependence of the earthquake process appropriately into account.
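The abstract does not specify which statistical models were used, but a standard building block for describing the temporal decay of aftershock activity in clustering models is the Omori-Utsu law; the following sketch uses generic illustrative parameters, not values fitted in this work.

```python
# Omori-Utsu law: aftershock rate n(t) = K / (c + t)**p after a
# mainshock. A standard statistical ingredient of time-dependent
# seismicity models; K, c, p below are generic illustrative values.

def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Aftershock rate (events/day) at time t (days) after the mainshock."""
    return K / (c + t) ** p

for t in (0.1, 1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} d  ->  rate = {omori_rate(t):10.2f} /day")
```

The roughly 1/t decay illustrates why time-averaged hazard estimates misrepresent the strongly elevated activity immediately after a large event.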
We theoretically discuss the interaction of neutral particles (atoms, molecules) with surfaces in the regime where it is mediated by the electromagnetic field. A thorough characterization of the field at sub-wavelength distances is worked out, including energy density spectra and coherence functions. The results are applied to typical situations in integrated atom optics, where ultracold atoms are coupled to a thermal surface, and to single molecule probes in near field optics, where sub-wavelength resolution can be achieved.
Worldwide, almost 40% of the population is overweight, and the prevalence of obesity, insulin resistance, and the resulting sequelae such as the metabolic syndrome and type 2 diabetes is rising rapidly. Poor dietary habits and lack of exercise are regarded as the most common causes. Non-alcoholic fatty liver disease (NAFLD), whose main characteristic is the excessive accumulation of lipids in the liver, correlates with the body mass index (BMI). NAFLD is regarded as the hepatic manifestation of the metabolic syndrome and is now the most common cause of impaired liver function. The disease comprises both benign hepatic steatosis (fatty liver) and the progressive form, non-alcoholic steatohepatitis (NASH), in which the steatosis is accompanied by inflammation and fibrosis. The development of NASH increases the risk of hepatocellular carcinoma (HCC) and can lead to irreversible liver cirrhosis and terminal organ failure. Dietary components such as cholesterol and high-fat diets are discussed as possible factors that promote the transition from simple fatty liver to the severe course of steatohepatitis/NASH. Expansion of adipose tissue is accompanied by insulin resistance and low-grade chronic inflammation of the adipose tissue. In addition to endotoxins from the gut, inflammatory mediators from adipose tissue reach the liver. As a consequence, the resident macrophages of the liver, the Kupffer cells, are activated; they initiate an inflammatory response and release further pro-inflammatory mediators, including chemokines, cytokines, and prostanoids such as prostaglandin E2 (PGE2).
This work aims to elucidate the contribution of PGE2 to the development of insulin resistance, hepatic steatosis, and inflammation in diet-induced NASH, in complex interplay with the regulation of cytokine production and other co-factors such as hyperinsulinemia and hyperlipidemia. In murine and human macrophage populations, it was investigated which factors promote the formation of PGE2 and how PGE2 regulates the inflammatory response of activated macrophages. In primary rat hepatocytes, as well as in isolated human hepatocytes and cell lines, the influence of PGE2, alone and in combination with cytokines whose formation can be affected by PGE2, on the insulin-dependent regulation of glucose and lipid metabolism was examined. To capture the influence of PGE2 in the complex interplay of cell types in the liver and in the whole organism, mice in which PGE2 synthesis was reduced by deletion of microsomal PGE synthase 1 (mPGES1) were fed a NASH-inducing diet. In livers of patients with NASH and in mice with diet-induced NASH, the expression of the PGE2-synthesizing enzymes cyclooxygenase 2 (COX2) and mPGES1, as well as the formation of PGE2, was increased compared with healthy controls and correlated with the severity of the liver disease. In primary macrophages from human, mouse, and rat, and in human macrophage cell lines, the formation of pro-inflammatory mediators such as chemokines, cytokines, and prostaglandins like PGE2 was enhanced when the cells were incubated with endotoxins such as lipopolysaccharide (LPS), with fatty acids such as palmitic acid, with cholesterol and cholesterol crystals, or with insulin, which is released in increased amounts as a consequence of the compensatory hyperinsulinemia accompanying insulin resistance.
Insulin acted synergistically with LPS or palmitic acid to increase the synthesis of PGE2 and of other inflammatory mediators such as interleukin (IL) 8 and IL-1β. PGE2 regulates the inflammatory response: in addition to inducing its own synthesizing enzymes, PGE2 enhanced the expression of the immune-cell-recruiting chemokines IL-8 and (C-C motif) ligand 2 (CCL2), as well as of the pro-inflammatory cytokines IL-1β and IL-6, in macrophages, and can thus contribute to an amplification of the inflammatory reaction. Furthermore, PGE2 promoted the formation of oncostatin M (OSM), and OSM in turn induced, in a positive feedback loop, the expression of the PGE2-synthesizing enzymes. On the other hand, PGE2 inhibited the basal and LPS-mediated formation of the potent pro-inflammatory cytokine tumor necrosis factor α (TNFα) and can thus attenuate the inflammatory reaction. In primary rat hepatocytes and in human hepatocytes, PGE2 directly impaired the insulin-dependent activation of the insulin receptor signaling chain that increases glucose utilization: via signaling cascades downstream of the various PGE2 receptors it activated kinases such as ERK1/2 and IKKβ and brought about an inhibitory serine phosphorylation of the insulin receptor substrates. PGE2 also enhanced IL-6- or OSM-mediated insulin resistance and steatosis in primary rat hepatocytes. The effect of PGE2 in the whole organism was then investigated in mice with diet-induced NASH. Feeding a high-fat diet with lard, which contains mainly saturated fatty acids, as the fat source caused obesity, insulin resistance, and hepatic steatosis in wild-type mice.
In animals that received a high-fat diet with soybean oil, which contains mainly (ω-6) polyunsaturated fatty acids (PUFAs), as the fat source, or a low-fat diet with cholesterol, only hepatic steatosis was detectable, but no increased weight gain compared with littermates fed a standard diet. In contrast, feeding a high-fat diet with PUFA-rich soybean oil as the fat source in combination with cholesterol caused obesity and insulin resistance as well as hepatic steatosis with hepatocyte hypertrophy, lobular inflammation, and incipient fibrosis in wild-type mice. These animals reflected all clinical and histological parameters of human NASH in the metabolic syndrome. Only the combination of large amounts of unsaturated fatty acids from soybean oil and cholesterol in the diet led to an excessive accumulation of cholesterol and the formation of cholesterol crystals in the hepatocytes, which resulted in mitochondrial damage, severe oxidative stress, and ultimately cell death. As a consequence, Kupffer cells phagocytose the debris of the cholesterol-overloaded hepatocytes, are thereby activated, and release chemokines, cytokines, and PGE2, which can amplify the inflammatory reaction and initiate the infiltration of further immune cells, thus driving progression to steatohepatitis (NASH). Deletion of microsomal PGE synthase 1 (mPGES1), the inducible enzyme of PGE2 synthesis from cyclooxygenase-dependent precursors, reduced the diet-dependent formation of PGE2 in the liver. Feeding the NASH-inducing diet caused similar obesity and gain in fat mass in wild-type and mPGES1-deficient mice, as well as the development of hepatic steatosis with inflammation and fibrosis (NASH) in the histological picture.
In mPGES1-deficient mice, however, parameters of inflammatory-cell infiltration and of diet-dependent liver damage were increased compared with wild-type animals, which was also reflected in stronger diet-induced systemic insulin resistance. The formation of the pro-inflammatory and pro-apoptotic cytokine TNFα was enhanced in mPGES1-deficient mice through loss of the negative feedback inhibition, resulting in increased diet-induced death of stressed, lipid-overloaded hepatocytes and a downstream inflammatory response. In summary, under the chosen experimental conditions an anti-inflammatory role of PGE2 was verified in vivo, since the prostanoid, mainly indirectly through inhibition of the TNFα-mediated inflammatory reaction, attenuated liver damage, the amplification of inflammation, and the development of insulin resistance in diet-related fatty liver disease.
This book is concerned with the diachronic development of selected topic and focus markers in Spanish, Portuguese and French. On the one hand, it traces the development of these structures from their relational meaning to their topic-/focus-marking meaning; on the other hand, it examines their current form and use. Romance topic and focus markers – such as Sp. en cuanto a, Pt. a propósito de, Fr. au niveau de or sentence-initial Sp. Lo que, as well as clefts and pseudo-clefts – are thus investigated from a quantitative and qualitative perspective. The author argues that topic markers have procedural meaning and that their function is bound to their syntactic position. An important contribution of this study is that real linguistic evidence (data from various corpora) is analyzed instead of operating with constructed examples.
This habilitation thesis includes seven case studies that examine climate variability during the past 3.5 million years from different temporal and spatial perspectives. The main geographical focus is on the climatic events of the African and Asian monsoonal systems, the North Atlantic, and the Arctic Ocean. The results of this study are based on marine and terrestrial climate archives obtained by sedimentological and geochemical methods and subsequently analyzed by various statistical methods.
The results presented herein provide a picture of the climatic background conditions of past cold and warm periods, of the sensitivity of past climate phases to changes in the atmospheric carbon dioxide content, and of the tight linkage between the low- and high-latitude climate system. Based on these results, it is concluded that a warm background climate state strongly influenced and/or partially reversed the linear relationships between individual climate processes that are valid today. The present work also emphasizes the driving force of the low latitudes for the climate variability of the high latitudes, contrary to the conventional view that global climate change over the past 3.5 million years was predominantly controlled by high-latitude climate variability. Furthermore, it is found that on long geological time scales (>1000 years to millions of years), solar irradiance variability due to changes in the Earth-Sun-Moon system may have increased the sensitivity of the low and high latitudes to changes in atmospheric carbon dioxide.
Taken together, these findings provide new insights into the sensitivity of past climate phases and supply new background conditions for numerical models that predict future climate change.
Bishops in the Frankish kingdom were influential political actors who, over the course of the 9th century, developed a learned body of knowledge about their own office. Reflections of this knowledge about the nature of the episcopal office can be found in many texts, most of them from West Francia. What has remained open, however, is what relevance this knowledge and the bishops' sense of their estate actually had: was it acknowledged as a normative frame of reference by the other politically relevant estates? How did it develop across the upheavals of the 10th century, in the post-Carolingian period and the beginnings of church reform? The book addresses these questions through an examination of depositions of bishops in West Francia in the 9th and 10th centuries and through an analysis of the image of the bishop in monastic as well as episcopal circles in the 10th and early 11th centuries. In this way a differentiated picture can be drawn of how the episcopal office was perceived and how the knowledge about it was concretely handled in different contexts.
Understanding the formation of stars in galaxies is central to much of modern astrophysics. For several decades it has been thought that the star formation process is primarily controlled by the interplay between gravity and magnetostatic support, modulated by neutral-ion drift. Recently, however, both observational and numerical work has begun to suggest that supersonic interstellar turbulence rather than magnetic fields controls star formation. This review begins with a historical overview of the successes and problems of both the classical dynamical theory of star formation, and the standard theory of magnetostatic support from both observational and theoretical perspectives. We then present the outline of a new paradigm of star formation based on the interplay between supersonic turbulence and self-gravity. Supersonic turbulence can provide support against gravitational collapse on global scales, while at the same time it produces localized density enhancements that allow for collapse on small scales. The efficiency and timescale of stellar birth in Galactic gas clouds strongly depend on the properties of the interstellar turbulent velocity field, with slow, inefficient, isolated star formation being a hallmark of turbulent support, and fast, efficient, clustered star formation occurring in its absence. After discussing in detail various theoretical aspects of supersonic turbulence in compressible self-gravitating gaseous media relevant for star forming interstellar clouds, we explore the consequences of the new theory for both local star formation and galactic scale star formation. The theory predicts that individual star-forming cores are likely not quasi-static objects, but dynamically evolving. Accretion onto these objects will vary with time and depend on the properties of the surrounding turbulent flow. This has important consequences for the resulting stellar mass function. 
Star formation on the scale of galaxies as a whole is expected to be controlled by the balance between gravity and turbulence, just like star formation on the scale of individual interstellar gas clouds, but may be modulated by additional effects such as cooling and differential rotation. The dominant mechanism for driving interstellar turbulence in star-forming regions of galactic disks appears to be supernova explosions. In the outer disk of our Milky Way or in low surface brightness galaxies, the coupling of rotation to the gas through magnetic fields or gravity may become important.
Computational cosmology
(2008)
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered the only “experiment” available to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to efficiently distribute the work and to focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure of how to set up their initial conditions. And as running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates have yet been detected. Utilizing both the novel simulation code and the halo finder we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics.
We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides; while these alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade simulation codes and computer hardware have advanced to such a stage that it became possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field that is now known as “Near-Field Cosmology”. We present an in-depth study of the dynamics of subhaloes and the development of debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their hosts may not be negligible, and that the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
Quantitative thermodynamic and geochemical modeling is today applied in a variety of geological environments, from the petrogenesis of igneous rocks to the oceanic realm. Thermodynamic calculations are used, for example, to get better insight into lithosphere dynamics, to constrain melting processes in crust and mantle, and to study fluid-rock interaction. The development of thermodynamic databases and computer programs to calculate equilibrium phase diagrams has greatly advanced our ability to model geodynamic processes from subduction to orogenesis. However, a well-known problem is that, despite its broad application, the use and interpretation of thermodynamic models applied to natural rocks is far from straightforward. For example, chemical disequilibrium and/or unknown rock properties, such as fluid activities, complicate the application of equilibrium thermodynamics.
One major aspect of the publications presented in this Habilitationsschrift are new approaches to unravel dynamic and chemical histories of rocks that include applications to chemically open system behaviour. This approach is especially important in rocks that are affected by element fractionation due to fractional crystallisation and fluid loss during dehydration reactions. Furthermore, chemically open system behaviour has also to be considered for studying fluid-rock interaction processes and for extracting information from compositionally zoned metamorphic minerals. In this Habilitationsschrift several publications are presented where I incorporate such open system behaviour in the forward models by incrementing the calculations and considering changing reacting rock compositions during metamorphism. I apply thermodynamic forward modelling incorporating the effects of element fractionation in a variety of geodynamic and geochemical applications in order to better understand lithosphere dynamics and mass transfer in solid rocks.
In three of the presented publications I combine thermodynamic forward models with trace element calculations in order to enlarge the application of geochemical numerical forward modeling. In these publications a combination of thermodynamic and trace element forward modeling is used to study and quantify processes in metamorphic petrology at spatial scales from µm to km. In the thermodynamic forward models I utilize Gibbs energy minimization to quantify mineralogical changes along a reaction path of a chemically open fluid/rock system. These results are combined with mass balanced trace element calculations to determine the trace element distribution between rock and melt/fluid during the metamorphic evolution. Thus, effects of mineral reactions, fluid-rock interaction and element transport in metamorphic rocks on the trace element and isotopic composition of minerals, rocks and percolating fluids or melts can be predicted.
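The mass-balanced trace-element step described here can be sketched as a simple partition-coefficient calculation. This is a generic illustration, not the author's actual code: the function name, the use of fixed mineral/fluid partition coefficients D_i, and the assumption that modal mass fractions plus the fluid fraction sum to one are all illustrative choices.

```python
# Hypothetical sketch: distribute a bulk trace-element concentration c0
# among coexisting minerals and a fluid by mass balance, assuming
# equilibrium partitioning c_mineral_i = D_i * c_fluid, so that
#   c0 = sum_i(x_i * D_i * c_fluid) + x_fluid * c_fluid.
def partition(c0, modes, D, x_fluid):
    """Return the fluid concentration and per-mineral concentrations.

    c0      : bulk concentration (e.g. ppm)
    modes   : modal mass fractions of the minerals
    D       : mineral/fluid partition coefficients, same order as modes
    x_fluid : mass fraction of the fluid phase
    """
    bulk_D = sum(x * d for x, d in zip(modes, D))
    c_fluid = c0 / (bulk_D + x_fluid)
    return c_fluid, [d * c_fluid for d in D]

# Example: two minerals (60 % and 30 % by mass) plus 10 % fluid.
c_fluid, c_minerals = partition(10.0, [0.6, 0.3], [5.0, 0.1], 0.1)
```

In an incremental open-system model of the kind described above, such a step would be repeated along the reaction path, with the fluid (and its trace-element load) removed from the reacting bulk composition after each increment.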
One of the included publications shows that trace element growth zonations in metamorphic garnet porphyroblasts can be used to get crucial information about the reaction path of the investigated sample. In order to interpret the major and trace element distribution and zoning patterns in terms of the reaction history of the samples, we combined thermodynamic forward models with mass-balance rare earth element calculations. Such combined thermodynamic and mass-balance calculations of the rare earth element distribution among the modelled stable phases yielded characteristic zonation patterns in garnet that closely resemble those in the natural samples. We can show in that paper that garnet growth and trace element incorporation occurred in near thermodynamic equilibrium with matrix phases during subduction and that the rare earth element patterns in garnet exhibit distinct enrichment zones that fingerprint the minerals involved in the garnet-forming reactions.
In two of the presented publications I illustrate the capacities of combined thermodynamic-geochemical modeling based on examples relevant to mass transfer in subduction zones. The first example focuses on fluid-rock interaction in and around a blueschist-facies shear zone in felsic gneisses, where fluid-induced mineral reactions and their effects on boron (B) concentrations and isotopic compositions in white mica are modeled. In the second example, fluid release from a subducted slab and associated transport of B and variations in B concentrations and isotopic compositions in liberated fluids and residual rocks are modeled. I show that, combined with experimental data on elemental partitioning and isotopic fractionation, thermodynamic forward modeling unfolds enormous capacities that are far from exhausted.
In the publications presented in this Habilitationsschrift I compare the modeled results to geochemical data of natural minerals and rocks and demonstrate that the combination of thermodynamic and geochemical models enables a quantification of metamorphic processes and insights into element cycling that would otherwise be unattainable.
Thus, the contributions to the scientific community presented in this Habilitationsschrift concern the fields of petrology, geochemistry and geochronology, but also ore geology, all of which use thermodynamic and geochemical models to solve various problems related to geo-materials.
The uptake of nutrients and their subsequent chemical conversion by reactions which provide energy and building blocks for growth and propagation is a fundamental property of life. This property is termed metabolism. In the course of evolution life has been dependent on chemical reactions which generate molecules that are common and indispensable to all life forms. These molecules are the so-called primary metabolites. In addition, life has evolved highly diverse biochemical reactions. These reactions allow organisms to produce unique molecules, the so-called secondary metabolites, which provide a competitive advantage for survival. The sum of all metabolites produced by the complex network of reactions within an organism has since 1998 been called the metabolome. The size of the metabolome can only be estimated and may range from fewer than 1,000 metabolites in unicellular organisms to approximately 200,000 in the whole plant kingdom. In current biology, three additional types of molecules are thought to be important to the understanding of the phenomena of life: (1) the proteins, in other words the proteome, including the enzymes which perform the metabolic reactions, (2) the ribonucleic acids (RNAs), which constitute the so-called transcriptome, and (3) all genes of the genome, which are encoded within the double strands of deoxyribonucleic acid (DNA). Investigations of each of these molecular levels of life require analytical technologies which should ideally enable the comprehensive analysis of all proteins, RNAs, et cetera. At the beginning of this thesis such analytical technologies were available for DNA, RNA and proteins, but not for metabolites. Therefore, this thesis was dedicated to the implementation of the gas chromatography – mass spectrometry technology, in short GC-MS, for the in-parallel analysis of as many metabolites as possible.
Today GC-MS is one of the most widely applied technologies and indispensable for the efficient profiling of primary metabolites. The main achievements and research topics of this work can be divided into technological advances and novel insights into the metabolic mechanisms which allow plants to cope with environmental stresses. Firstly, the GC-MS profiling technology has been highly automated and standardized. The major technological achievements were (1) substantial contributions to the development of automated and, within the limits of GC-MS, comprehensive chemical analysis, (2) contributions to the implementation of time of flight mass spectrometry for GC-MS based metabolite profiling, (3) the creation of a software platform for reproducible GC-MS data processing, named TagFinder, and (4) the establishment of an internationally coordinated library of mass spectra which allows the identification of metabolites in diverse and complex biological samples. In addition, the Golm Metabolome Database (GMD) has been initiated to harbor this library and to cope with the increasing amount of generated profiling data. This database makes publicly available all chemical information essential for GC-MS profiling and has been extended to a global resource of GC-MS based metabolite profiles. Querying the concentration changes of hundreds of known and yet non-identified metabolites has recently been enabled by uploading standardized, TagFinder-processed data. Long-term technological aims have been pursued with the central aims (1) to enhance the precision of absolute and relative quantification and (2) to enable the combined analysis of metabolite concentrations and metabolic flux. In contrast to concentrations which provide information on metabolite amounts, flux analysis provides information on the speed of biochemical reactions or reaction sequences, for example on the rate of CO2 conversion into metabolites. 
This conversion is an essential function of plants which is the basis of life on earth. Secondly, GC-MS based metabolite profiling technology has been continuously applied to advance plant stress physiology. These efforts have yielded a detailed description of and new functional insights into metabolic changes in response to high and low temperatures as well as common and divergent responses to salt stress among higher plants, such as Arabidopsis thaliana, Lotus japonicus and rice (Oryza sativa). Time course analysis after temperature stress and investigations into salt dosage responses indicated that metabolism changed in a gradual manner rather than by stepwise transitions between fixed states. In agreement with these observations, metabolite profiles of the model plant Lotus japonicus, when exposed to increased soil salinity, were demonstrated to have a highly predictive power for both NaCl accumulation and plant biomass. Thus, it may be possible to use GC-MS based metabolite profiling as a breeding tool to support the selection of individual plants that cope best with salt stress or other environmental challenges.
Immobilization or mobilization and transport of pollutants in the environment, especially in the soil and water compartments, are of fundamental importance for our life and survival on Earth. One of the principal reaction partners for organic and inorganic pollutants (xenobiotics) in the environment are humic substances (HS). HS are degradation products of plant and animal tissue, formed by a combination of chemical and biological decomposition and transformation processes. Owing to their genesis, HS constitute extraordinarily heterogeneous material systems that display a wide range of different interactions with pollutants. Investigating the fundamental interaction mechanisms, as well as describing them quantitatively, places the highest demands on the analytical methods. For the qualitative and quantitative characterization of the interactions between HS and xenobiotics, analytical methods are therefore required that can deliver meaningful data when applied to extremely heterogeneous systems. Spectroscopic techniques in particular, such as luminescence-based methods, combine excellent selectivity and sensitivity with a multidimensionality (for luminescence, the observables are the intensity I_F, the excitation wavelength λ_ex, the emission wavelength λ_em and the fluorescence decay time τ_F) that allows even heterogeneous systems such as HS to be studied directly. Both the intrinsic fluorescence properties of the HS and those of specially introduced luminescent probes can be used for characterization. In both cases, the underlying fundamental concepts of the interactions of HS with xenobiotics are investigated and characterized. For the intrinsic fluorescence of HS it could be shown that, besides molecular structures, the linkage of the fluorophores within the overall HS molecule is of particular importance.
Conformational freedom and the proximity to HS-intrinsic groups acting as energy acceptors are important determinants of the characteristic HS fluorescence. The quenching of the intrinsic fluorescence by metal complexation is accordingly also a result of the altered conformational freedom of the HS caused by the bound metal ions. It was found that, depending on the metal ion, both quenching and enhancement of the intrinsic HS fluorescence can be observed. Polycyclic aromatic hydrocarbons and lanthanide ions were employed as extrinsic luminescent probes with well-characterized photophysical properties. Investigations at very low temperatures (10 K) made it possible for the first time to probe the microenvironment of hydrophobic xenobiotics bound to HS. Comparison with room-temperature experiments showed that hydrophobic xenobiotics bound to HS reside in a microenvironment whose polarity is analogous to that of short-chain alcohols. For the case of metal complexation, energy transfer processes between HS and lanthanide ions, and between different bound lanthanide ions, were investigated. Based on these measurements, conclusions can be drawn about the electronic states of the HS involved on the one hand, and about the distances between metal binding sites within the HS themselves on the other. Notably, the experiments were carried out in solution at realistic concentrations. From the measured energy transfer rates, direct conclusions about conformational changes and aggregation processes of HS can be derived.
Food intake is driven by the need for energy but also by the demand for essential nutrients such as protein. Whereas it was well known how diets high in protein mediate satiety, it remained unclear how diets low in protein induce appetite. Therefore, this thesis aims to contribute to the research area of the detection of restricted dietary protein and adaptive responses.
This thesis provides clear evidence that the liver-derived hormone fibroblast growth factor 21 (FGF21) is an endocrine signal of dietary protein restriction, with the cellular amino acid sensor general control nonderepressible 2 (GCN2) kinase acting as an upstream regulator of FGF21 during protein restriction. In the brain, FGF21 mediates the metabolic responses to protein restriction, e.g. increased energy expenditure, food intake and insulin sensitivity, and improved glucose homeostasis. Furthermore, endogenous FGF21 induced by dietary protein or methionine restriction prevents the onset of type 2 diabetes in the New Zealand Obese mouse.
Overall, FGF21 plays an important role in the detection of protein restriction and macronutrient imbalance in rodents and humans, and mediates both the behavioral and metabolic responses to dietary protein restriction. This makes FGF21 a critical physiological signal of dietary protein restriction, highlighting the important but often overlooked impact of dietary protein on metabolism and eating behavior, independent of dietary energy content.
NiFe hydrogenases
(2020)
Die verletzte Republik
(2022)
The study asks what narrative literature contributes to a dialogue about forms of violence in the social space of France at the beginning of the 21st century.
Drawing on Bourdieu's concepts from the sociology of literature, it first discusses the perspective on narrated violence that is required if the knowledge carried by literature is to be grasped in a way relevant to the social sciences. The text corpus examined for this purpose consists of widely received narrative texts of the literary field in France, most of which appeared in the second decade of the 21st century.
Starting from theoretical reflections on the limits and possibilities of such a field-sociological focus on the literature of the immediate present, the study examines, on the basis of the actual texts and with the tools of literary scholarship, how and why French literature narrates different forms of violence: the memory of the violent traumas of the 20th century that have since become historical, the terrorism of the 21st century, present-day racism and classism, femicides and homophobia, those «left behind» in rural areas but also in the heart of the metropolis, and unemployment and poverty in France.
The aim is to open up a perspective from literary studies that complements sociological and historical research on violence in the social space of our European neighbour.
The habilitation deals with the numerical analysis of the recurrence properties of geological and climatic processes. The recurrence of states of dynamical processes can be analysed with recurrence plots and various recurrence quantification options. In the present work, the meaning of the structures and information contained in recurrence plots is examined and described. New developments have led to extensions that can be used to describe recurring patterns in both space and time. Other important developments include recurrence plot-based approaches to identify abrupt changes in a system's dynamics, to detect and investigate external influences on the dynamics of a system and the couplings between different systems, as well as a combination of recurrence plots with the methodology of complex networks. Typical problems in geoscientific data analysis, such as irregular sampling and uncertainties, are tackled by specific modifications and additions. The development of a significance test allows the statistical evaluation of quantitative recurrence analysis, especially for the identification of dynamical transitions. Finally, an overview of typical pitfalls that can occur when applying recurrence-based methods is given and guidelines on how to avoid them are discussed. In addition to the methodological aspects, the application potential especially for geoscientific research questions is discussed, such as the identification and analysis of transitions in past climates, the study of the influence of external factors on ecological or climatic systems, or the analysis of land-use dynamics based on remote sensing data.
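The core object of such analyses, the recurrence plot, can be sketched in a few lines. This is a minimal generic illustration, not the thesis's actual toolchain; the fixed threshold `eps`, the scalar time series, and the recurrence-rate measure are illustrative choices (the thesis discusses many further quantification options and embedding choices).

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if |x_i - x_j| < eps."""
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances
    return (dist < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, the simplest quantification measure."""
    return R.mean()

# A periodic signal produces the hallmark diagonal-line structure.
t = np.linspace(0, 4 * np.pi, 200)
x = np.sin(t)
R = recurrence_matrix(x, eps=0.1)
```

For real (multivariate or embedded) data, the absolute difference would be replaced by a norm in phase space; line-based measures such as determinism are then computed from the diagonal structures of `R`.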
Electrets are materials capable of storing oriented dipoles or an electric surplus charge for long periods of time. The term "electret" was coined by Oliver Heaviside in analogy to the well-known word "magnet". Initially regarded as a mere scientific curiosity, electrets became increasingly important for applications during the second half of the 20th century. The most famous example is the electret condenser microphone, developed in 1962 by Sessler and West. Today, these devices are produced in annual quantities of more than 1 billion and have become indispensable in modern communications technology. Even though space-charge electrets are widely used in transducer applications, relatively little was known about the microscopic mechanisms of charge storage. It was generally accepted that the surplus charges are stored in some form of physical or chemical traps. However, trap depths of less than 2 eV, obtained via thermally stimulated discharge experiments, conflicted with the observed lifetimes (extrapolations of experimental data yielded more than 100,000 years). Using a combination of photostimulated discharge spectroscopy and simultaneous depth-profiling of the space-charge density, the present work shows for the first time that at least part of the space charge in, e.g., polytetrafluoroethylene, polypropylene and polyethylene terephthalate is stored in traps with depths of up to 6 eV, indicating major local structural changes. Based on this information, more efficient charge-storing materials could be developed in the future. The new experimental results could only be obtained after several techniques for characterizing the electrical and electromechanical properties of electrets had been enhanced with in-situ capability. For instance, real-time information on space-charge depth-profiles was obtained by subjecting a polymer film to short laser-induced heat pulses.
The high data acquisition speed of this technique also allowed the three-dimensional mapping of polarization and space-charge distributions. A highly active field of research is the development of piezoelectric sensor films from electret polymer foams. These materials store charges on the inner surfaces of the voids after having been subjected to a corona discharge, and exhibit piezoelectric properties far superior to those of traditional ferroelectric polymers. By means of dielectric resonance spectroscopy, polypropylene foams (presently the most widely used ferroelectret) were studied with respect to their thermal and UV stability. Their limited thermal stability renders them unsuitable for applications above 50 °C. Using a solvent-based foaming technique, we found an alternative material based on amorphous Teflon® AF, which exhibits a stable piezoelectric coefficient of 600 pC/N at temperatures up to 120 °C.
Alfred Wegener's ideas on continental drift were doubted for several decades, until the discovery of magnetic polarity changes recorded at the Atlantic seafloor and the seismic catalogs imaging oceanic subduction underneath the continental crust (Wadati-Benioff zone). It took another 20 years until plate motion could be directly observed and quantified using space geodesy. Since then, it has been unthinkable to do neotectonic research without satellite-based methods.
Thanks to a tremendous increase of instrumental observations in space and time over the last decades, we have significantly increased our knowledge of the complexity of the seismic cycle, that is, the interplay of tectonic stress build-up and release. The classical assumption that earthquakes are the only significant phenomena releasing strain that previously accumulated in a linear fashion is outdated. We now know that this concept is in fact accompanied by a wide range of slow and fast processes such as triggered slip, afterslip, post-seismic and visco-elastic relaxation of the lower crust, dynamic pore-pressure changes in the elastic crust, aseismic creep, slow slip events and seismic swarms. On the basis of eleven peer-reviewed studies I here present the diversity of crustal deformation processes. Based on time-series analyses of radar imagery and satellite-based positioning data I quantify tectonic surface deformation and use numerical and analytical models, together with independent geologic and seismologic data, to better understand the underlying crustal processes.
The main part of my work focuses on the deformation observed in the Pamir, the Hindu Kush and the Tian Shan, which together build the highly active continental collision zone between Northwest India and Eurasia. Centered around the Sarez earthquake that ruptured the center of the Pamir in 2015, I present diverse examples of crustal deformation phenomena. The driver of the deformation is the Indian indenter, bulldozing into the Pamir and compressing the orogen, which then collapses westward into the Tajik depression. My second natural observatory for studying tectonic deformation is the oceanic subduction zone in Chile, which repeatedly hosts large earthquakes of magnitude 8 and more. These are ideal for studying post-seismic relaxation processes and the coupling of large earthquakes.
My findings illustrate in how complex a fashion, and how strongly, the different deformation phenomena are coupled in space and time. My publications contribute to the awareness that the classical concept of the seismic cycle needs to be revised, which in turn has a large influence on classical, probabilistic seismic hazard assessment, which primarily relies on statistically solid recurrence times.
The ecohydrological transfers, interactions and degradation arising from high-intensity storm events
(2015)
Die Koloniale Karibik
(2012)
Does the Caribbean of the 19th century not anticipate phenomena and processes that are only now entering our awareness? Looking at the kaleidoscopic world of the Caribbean through the literary and cultural processes of transfer of that era allows entirely new insights into the early processes of cultural globalization. Racist discourses, established models of "white" abolitionists, politics of memory and the hitherto scarcely acknowledged role of the Haitian Revolution combine into an amalgam that calls into question our conventional concept of a genuinely Western modernity.
The study presents the workings and the acquisition of German capitalization on a theoretical and empirical basis. Its starting point is a text-pragmatic generalization of previous graphematic approaches, which are extended into an overarching model of majuscule use in German that also encompasses non-orthographic domains (all-caps setting, small capitals, word-internal capitals, etc.).
In the empirical part of the study, the orthographic performance data of roughly 5,700 subjects of different age groups (4th grade up to adult education) are examined and developed into a general acquisition model of capitalization. Using neural network simulations, different learner types are distinguished and discontinuities in the acquisition of competence are demonstrated, pointing to qualitative strategy changes during ontogenesis. The study closes with reflections on the data from the perspectives of orthography didactics and spelling diagnostics.
Synchronization of coupled oscillators manifests itself in many natural and man-made systems, including circadian clocks, central pattern generators, laser arrays, power grids, and chemical and electrochemical oscillators, to name only a few. The mathematical description of this phenomenon is often based on the paradigmatic Kuramoto model, which represents each oscillator by one scalar variable, its phase. When coupled, phase oscillators constitute a high-dimensional dynamical system which exhibits complex behaviour, ranging from synchronized uniform oscillation to quasiperiodicity and chaos. The corresponding collective rhythms can be useful or harmful to the normal operation of various systems, and have therefore been the subject of much research.
Initially, synchronization phenomena were studied in systems with all-to-all (global) and nearest-neighbour (local) coupling, or on random networks. In recent decades, however, there has been much interest in more complicated coupling structures, which take into account the spatially distributed nature of real-world oscillator systems and the distance-dependent nature of the interaction between their components. Examples of such systems abound in biology and neuroscience. They include spatially distributed cell populations, cilia carpets and neural networks relevant to working memory. In many cases, these systems support a rich variety of patterns of synchrony and disorder with remarkable properties that have not been observed in other continuous media. Such patterns are usually referred to as coherence-incoherence patterns, but in symmetrically coupled oscillator systems they are also known by the name chimera states.
The main goal of this work is to give an overview of different types of collective behaviour in large networks of spatially distributed phase oscillators and to develop mathematical methods for their analysis. We focus on the Kuramoto models for one-, two- and three-dimensional oscillator arrays with nonlocal coupling, where the coupling extends over a range wider than nearest neighbour coupling and depends on separation. We use the fact that, for a special (but still quite general) phase interaction function, the long-term coarse-grained dynamics of the above systems can be described by a certain integro-differential equation that follows from the mathematical approach called the Ott-Antonsen theory. We show that this equation adequately represents all relevant patterns of synchrony and disorder, including stationary, periodically breathing and moving coherence-incoherence patterns. Moreover, we show that this equation can be used to completely solve the existence and stability problem for each of these patterns and to reliably predict their main properties in many application relevant situations.
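The Kuramoto model underlying this work can be illustrated with a toy simulation. The sketch below uses global (all-to-all) coupling, identical coupling strength, a simple Euler integrator and illustrative parameter values; the thesis itself treats nonlocally coupled one-, two- and three-dimensional arrays and their Ott-Antonsen reduction, which this minimal example does not implement.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the globally coupled Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    return theta + dt * (omega + K * coupling)

def order_parameter(theta):
    """|r| in [0, 1]: ~0 for incoherence, ~1 for full synchronization."""
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
N = 500
theta = rng.uniform(0, 2 * np.pi, N)      # random initial phases
omega = rng.normal(0.0, 0.1, N)           # narrow frequency spread
K = 2.0                                   # coupling well above critical
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K, dt=0.05)
# order_parameter(theta) is now close to 1: the population has locked.
```

In the nonlocal setting studied in the thesis, the uniform mean-field coupling above is replaced by a convolution with a distance-dependent kernel, which is precisely what allows coexisting coherent and incoherent regions (chimera states) to form.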
Inland waters have traditionally been viewed as closed ecosystems, and the cycling of water and nutrients in the pelagic zone of lakes in particular is cited as an example of this. In the recent past, however, important links of the open-water body of lakes and rivers have been demonstrated, on the one hand with the benthic zone and on the other with the littoral zone, the terrestrial shoreline and its catchment. As a result, the horizontal and vertical connectivity of aquatic ecosystems has attracted increased scientific interest in recent years, and with it the ecological functions of the water bottom (benthic zone) and the shore zones (littoral zone). The newly described connectivity within and between these habitats has far-reaching consequences for our picture of the functioning of aquatic ecosystems. In the present habilitation thesis, a series of internal and external functional linkages in the horizontal and vertical spatial dimensions is demonstrated, using the example of rivers and lakes of the northeast German lowlands. The underlying investigations mostly covered both abiotic and biological variables, and spanned a broad spectrum thematically, methodologically and with respect to the water bodies studied. Key ecological processes such as nutrient retention, carbon turnover, extracellular enzyme activity and resource transfer in food webs (using stable-isotope methods) were investigated in laboratory and field experiments as well as through quantitative field measurements.
With respect to rivers, these studies yielded essential insights into the effect of a connectivity-shaped hydromorphology on aquatic biodiversity and on benthic-pelagic coupling, which in turn represents a key process for the retention of substances transported in the flowing water, and thus ultimately for the productivity of a river reach. The littoral zone of lakes was hardly studied in Central Europe for decades, so that the investigations carried out on the community structure, habitat preferences and food-web linkages of the eulittoral macrozoobenthos yielded fundamentally new insights, which also feed directly into approaches for the ecological assessment of lakeshores in accordance with the EC Water Framework Directive. It could thus be shown that the intensity of both internal and external ecological connectivity is substantially influenced by the hydrology and morphology of the water bodies as well as by the availability of nutrients, which in this way often shape the ecological functioning of the waters. Vertical and horizontal connectivity contribute to stabilizing the ecosystems involved by enabling the exchange of plant nutrients, of biomass and of migrating organisms, thereby bridging periods of resource scarcity. In water management, these results can be used in the sense that ensuring horizontal and vertical connectivity is generally associated with spatially more complex, more diverse, temporally and structurally more resilient and more productive ecosystems, which can thus be used more intensively and more reliably in a sustainable manner.
The human use of a small selection of the ecosystem services of rivers and lakes has often led to a strong reduction in ecological connectivity and, as a consequence, to severe losses in other ecosystem services. The results of the research presented here also show that the development and implementation of strategies for the integrated management of complex social-ecological systems can be substantially supported if horizontal and vertical connectivity is developed in a targeted manner.
Ferroelectrets are internally charged polymer foams or cavity-containing polymer-film systems that combine large piezoelectricity with mechanical flexibility and elastic compliance. The term “ferroelectret” was coined based on the fact that it is a space-charge electret that also shows ferroic behavior. In this thesis, comprehensive work on ferroelectrets, and in particular on their preparation, their charging, their piezoelectricity and their applications, is reported.
For industrial applications, ferroelectrets with well-controlled distributions or even uniform values of cavity size and cavity shape and with good thermal stability of the piezoelectricity are very desirable. Several types of such ferroelectrets are developed using techniques such as straightforward thermal lamination, sandwiching sticky templates with electret films, and screen printing. In particular, fluoroethylenepropylene (FEP) film systems with tubular-channel openings, prepared by means of the thermal lamination technique, show piezoelectric d33 coefficients of up to 160 pC/N after charging through dielectric barrier discharges (DBDs). For samples charged at suitably elevated temperatures, the piezoelectricity is stable at temperatures of at least 130°C. These preparation methods are easy to implement at laboratory or industrial scale, and are quite flexible in terms of material selection and cavity-geometry design. Due to their uniform and well-controlled cavity structures, the samples are also very suitable for fundamental studies on ferroelectrets.
Charging of ferroelectrets is achieved via a series of dielectric barrier discharges (DBDs) inside the cavities. In the present work, the DBD charging process is comprehensively studied by means of optical, electrical and electro-acoustic methods. The spectrum of the transient light from the DBDs in cellular polypropylene (PP) ferroelectrets directly confirms the ionization of molecular nitrogen, and allows the determination of the electric field in the discharge. Detection of the light emission reveals not only DBDs under high applied voltage but also back discharges when the applied voltage is reduced to sufficiently low values. Back discharges are triggered by the internally deposited charges, as the breakdown inside the cavities is controlled by the sum of the applied electric field and the electric field of the deposited charges. The remanent effective polarization is determined by the breakdown strength of the gas-filled cavities. These findings form the basis of more efficient charging techniques for ferroelectrets such as charging with high-pressure air, thermal poling and charging assisted by gas exchange. With the proposed charging strategies, the charging efficiency of ferroelectrets can be enhanced significantly.
After charging, the cavities can be considered as man-made macroscopic dipoles whose direction can be reversed by switching the polarity of the applied voltage. Polarization-versus-electric-field (P(E)) hysteresis loops in ferroelectrets are observed by means of an electro-acoustic method combined with dielectric resonance spectroscopy. P(E) hysteresis loops in ferroelectrets are also obtained by more direct measurements using a modified Sawyer-Tower circuit. The hysteresis loops prove the ferroic behavior of ferroelectrets. However, repeated switching of the macroscopic dipoles involves complex physico-chemical processes. The DBD charging process generates a cold plasma with numerous active species and thus modifies the inner polymer surfaces of the cavities. Such treatments strongly affect the chargeability of the cavities. At least for cellular PP ferroelectrets, repeated DBDs under atmospheric conditions lead to considerable fatigue of the effective polarization and of the resulting piezoelectricity.
The macroscopic dipoles in ferroelectrets are highly compressible, and hence the piezoelectricity is essentially a primary effect. It is found that the piezoelectric d33 coefficient is proportional to the polarization and to the elastic compliance of the sample, providing hints for developing materials with higher piezoelectric sensitivity in the future. Due to their outstanding electromechanical properties, there has been constant interest in applications of ferroelectrets. The antiresonance frequencies (fp) of ferroelectrets are sensitive to the boundary conditions during measurement. A tubular-channel FEP ferroelectret is conformably attached to a self-organized minimum-energy dielectric elastomer actuator (DEA). It turns out that the antiresonance frequency (fp) of the ferroelectret film changes noticeably with the bending angle of the DEA. Therefore, the actuation of DEAs can be used to modulate the fp value of ferroelectrets, but fp can also be exploited for in-situ diagnosis and for precise control of the actuation of the DEA. The combination of DEAs and ferroelectrets opens up various new possibilities for applications.
It has been known for several years that under certain conditions electrons can be confined within thin layers even if these layers consist of metal and are supported by a metal substrate. In photoelectron spectra, these layers show characteristic discrete energy levels, and it has turned out that these lead to large effects, such as the oscillatory magnetic coupling technically exploited in modern hard-disk reading heads. The current work asks to what extent the concepts underlying quantization in two-dimensional films can be transferred to lower dimensionality. This problem is approached by a stepwise transition from two-dimensional layers to one-dimensional nanostructures. On the one hand, these nanostructures are represented by terraces on atomically stepped surfaces, on the other hand by atom chains which are deposited onto these terraces, up to complete coverage by atomically thin nanostripes. Furthermore, self-organization effects are used in order to arrive at perfectly one-dimensional atomic arrangements at surfaces. Angle-resolved photoemission is particularly suited as the method of investigation because it reveals the behavior of the electrons in these nanostructures as a function of the spatial direction, which distinguishes it from, e.g., scanning tunneling microscopy. With this method, intense and at times surprisingly large effects of one-dimensional quantization are observed for various exemplary systems, partly for the first time. The essential role of band gaps in the substrate, known from two-dimensional systems, is confirmed for nanostructures. In addition, we reveal an ambiguity without precedent in two-dimensional layers between spatial confinement of electrons on the one side and superlattice effects on the other, as well as between effects caused by the sample and by the measurement process. The latter effects are huge and can dominate the photoelectron spectra.
Finally, the effects of reduced dimensionality are studied in particular for the d electrons of manganese, which are additionally affected by strong correlation effects. Surprising results are also obtained here. (The links to the sources of the publications included in the appendix can be found on page 83 of the full text.)
Intermolecular deactivation between an excited fluorophore and a quencher via electron transfer can be described in terms of dynamic and static quenching. It is proposed to divide the dynamic quenching process into a transport phase and an interaction phase. Results on the quenching of N-heteroarenes by naphthalene at high quencher concentrations are described with static quenching. In addition, CT systems are investigated. After an overview of static models of resonance energy transfer, a model derived from hit theory is presented and tested on examples. The experiments are computer-controlled.
In a classical context, synchronization means the adjustment of the rhythms of self-sustained periodic oscillators due to their weak interaction. The history of synchronization goes back to the 17th century, when the famous Dutch scientist Christiaan Huygens reported his observation of the synchronization of pendulum clocks: when two such clocks were put on a common support, their pendula moved in perfect agreement. In rigorous terms, this means that due to the coupling the clocks started to oscillate with identical frequencies and tightly related phases. Although it is probably the oldest scientifically studied nonlinear effect, synchronization was understood only in the 1920s, when E. V. Appleton and B. van der Pol systematically studied, theoretically and experimentally, the synchronization of triode generators. Since then the theory has been well developed and has found many applications. Nowadays it is well known that certain systems, even rather simple ones, can exhibit chaotic behaviour. This means that their rhythms are irregular and cannot be characterized by only one frequency. However, as is shown in this habilitation work, one can extend the notion of phase to systems of this class as well and observe their synchronization, i.e., an agreement of their (still irregular!) rhythms: due to very weak interaction, relations between the phases and average frequencies appear. This effect, called phase synchronization, was later confirmed in laboratory experiments of other scientific groups. Understanding the synchronization of irregular oscillators allowed us to address an important problem of data analysis: how to reveal weak interaction between systems if we cannot influence them, but can only passively observe them, measuring some signals. This situation is very often encountered in biology, where synchronization phenomena appear on every level, from cells to macroscopic physiological systems, in normal states as well as in severe pathologies.
With our methods we found that the cardiovascular and respiratory systems in humans can adjust their rhythms; the strength of their interaction increases with maturation. Next, we used our algorithms to analyse the brain activity of Parkinsonian patients. The results of this collaborative work with neuroscientists show that different brain areas synchronize just before the onset of pathological tremor. Moreover, we succeeded in localizing the brain areas responsible for tremor generation.
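The data-analysis approach described above rests on assigning an instantaneous phase to an irregular measured signal, commonly via the analytic signal, and then quantifying the locking of two phases. Below is a minimal, generic sketch of an n:m phase synchronization index of this kind; the exact algorithms and parameters of the original work may differ.

```python
import numpy as np

def instantaneous_phase(s):
    """Instantaneous phase via the analytic signal, built with an
    FFT-based Hilbert transform (zero out negative frequencies)."""
    s = np.asarray(s, dtype=float) - np.mean(s)
    n = s.size
    S = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.unwrap(np.angle(np.fft.ifft(S * h)))

def phase_sync_index(s1, s2, n=1, m=1):
    """n:m phase synchronization index |<exp(i(n*phi1 - m*phi2))>|:
    close to 0 for independent phases, close to 1 for phase locking."""
    phi1 = instantaneous_phase(s1)
    phi2 = instantaneous_phase(s2)
    return float(np.abs(np.mean(np.exp(1j * (n * phi1 - m * phi2)))))
```

Applied to two signals with a common rhythm, the index approaches 1 even when the amplitudes are uncorrelated, which is what makes phase-based measures suitable for detecting weak interaction from passive observations.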
Biopsychosocial aspects of occupational reintegration after cardiological rehabilitation
(2020)
Over millennia, droughts could not be understood or defined and were instead associated with mystical connotations. To understand this natural hazard, we first needed to understand the laws of physics and then develop plausible explanations of the inner workings of the hydrological cycle. Consequently, modeling and predicting droughts was beyond the reach of mankind until the end of the last century. Recent studies estimate that this natural hazard has caused billions of dollars in losses since 1900 and that droughts affected 2.2 billion people worldwide between 1950 and 2014.
For these reasons, droughts have been identified by the IPCC as the trigger of a web of impacts across many sectors, leading to land degradation, migration and substantial socio-economic costs. This thesis summarizes a decade of research carried out at the Helmholtz Centre for Environmental Research on the subject of drought monitoring, modeling, and forecasting, from local to continental scales. The overarching objectives of this study, systematically addressed in the twelve previous chapters, are: 1) to create the capability to seamlessly monitor and predict water fluxes at various spatial resolutions and temporal scales varying from days to centuries; 2) to develop and test a modeling chain for monitoring, forecasting and predicting drought events and related characteristics at national and continental scales; and 3) to develop drought indices and impact indicators that are useful for end-users. Key outputs of this study are: the development of the open-source model mHM, the German Drought Monitor System, the proof of concept for a European multi-model for improving water management from local to continental scales, and the prototype of a crop-yield drought impact model for Germany.
Carbon nitride semiconductors: properties and application as photocatalysts in organic synthesis
(2023)
Graphitic carbon nitrides (g-CNs) are represented by melon-type g-CN, poly(heptazine imides) (PHIs), triazine-based g-CN and poly(triazine imide) with intercalated LiCl (PTI/Li+Cl‒). These materials are composed of sp2-hybridized carbon and nitrogen atoms; the C:N ratio is close to 3:4; the building unit is 1,3,5-triazine or tri-s-triazine; the building units are interconnected covalently via sp2-hybridized nitrogen atoms or NH moieties; the layers are assembled into a stack via weak van der Waals forces, as in graphite. Due to their medium band gap (~2.7 eV), g-CNs such as melon-type g-CN and PHIs are excited by photons with wavelengths ≤ 460 nm. Since 2009, g-CNs have been actively studied as photocatalysts in the evolution of hydrogen and oxygen, the two half-reactions of full water splitting, by employing corresponding sacrificial agents. At the same time, the application of g-CNs as photocatalysts in organic synthesis has remained limited to only a few reactions. This cumulative habilitation summarizes the research conducted between 2017 and 2023 in the field of carbon nitride organic photocatalysis by the group ‘Innovative Heterogeneous Photocatalysis’, which is led by Dr. Oleksandr Savatieiev.
g-CN photocatalysts activate molecules, i.e. generate their more reactive open-shell intermediates, via three modes: i) photoinduced electron transfer (PET); ii) excited-state proton-coupled electron transfer (ES-PCET) or direct hydrogen atom transfer (dHAT); iii) energy transfer (EnT). The scope of reactions that proceed via oxidative PET, i.e. one-electron oxidation of a substrate to the corresponding radical cation, is represented by the synthesis of sulfonyl chlorides from S-acetylthiophenols. The scope of reactions that proceed via reductive PET, i.e. one-electron reduction of a substrate to the corresponding radical anion, is represented by the synthesis of γ,γ-dichloroketones from enones and chloroform.
Due to the abundance of sp2-hybridized nitrogen atoms in the structure of g-CN materials, they are able to cleave X-H bonds in organic molecules and temporarily store a hydrogen atom. The ES-PCET or dHAT mode of activating organic molecules to the corresponding radicals applies to substrates featuring relatively acidic X-H bonds and to those characterized by low bond dissociation energies, such as C-H bonds next to heteroelements. On the other hand, reductively quenched g-CN carrying a hydrogen atom reduces a carbonyl compound to the ketyl radical via PCET, which is a thermodynamically more favorable pathway than electron transfer. The scope of these reactions is represented by the cyclodimerization of α,β-unsaturated ketones to cyclopentanols.
The g-CN excited state shows complex dynamics, with the initial formation of a singlet excited state, which upon intersystem crossing produces a triplet excited state characterized by a lifetime > 2 μs. Due to this long lifetime, g-CNs can activate organic molecules via EnT. For example, g-CN sensitizes singlet oxygen, which is the key intermediate in the dehydrogenation of aldoximes to nitrile oxides. The transient nitrile oxide undergoes [3+2]-cycloaddition to nitriles and gives 1,2,4-oxadiazoles.
PET, ES-PCET and EnT are fundamental phenomena with applications beyond organic photocatalysis. A hybrid composite is formed by combining a conductive polymer, poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS), with potassium poly(heptazine imide) (K-PHI). Upon PET, K-PHI modulates the population of polarons and therefore the conductivity of PEDOT:PSS. The initial state of PEDOT:PSS is recovered upon exposure of the material to O2. K-PHI:PEDOT:PSS may therefore be applied in O2 sensing.
In the presence of electron donors, such as tertiary amines and alcohols, and under irradiation with light, K-PHI undergoes photocharging: the g-CN material accumulates electrons and charge-compensating cations. This photocharged state is stable under anaerobic conditions for weeks, yet at the same time it is a strong reductant. This feature allows light harvesting and energy storage, in the form of electron-proton couples, to be decoupled in time from their utilization in organic synthesis. The photocharged state of K-PHI reduces nitrobenzene to aniline and enables the dimerization of α,β-unsaturated ketones to hexadienones in the dark.
The anatomically modern human Homo sapiens sapiens is distinguished by a high adaptability of physiology, physique and behaviour to short-term changes in environmental conditions. Since our environmental factors are constantly changing because of anthropogenic influences, the question arises as to how far these changes affect the human phenotype during the very sensitive growth phase of children and adolescents. The growth and development of all children and adolescents follow a universal and typical pattern. This pattern has evolved as the result of trade-offs during the 6-7 million years of human evolution. This typically human growth pattern differs from that of other long-lived social primate species. It can be divided into different biological age stages, with specific biological, cognitive and socio-cultural signs. Phenotypic plasticity is the ability of an organism to react to an internal or external environmental input with a change in form, state, movement or rate of activity (West-Eberhard 2003). This plasticity becomes visible and measurable particularly when, in addition to the normal variability of phenotypic characteristics within a population, the manifestation of these characteristics changes within a relatively short time. The focus of the present work is the comparison of age-specific dimensional changes. The basis of the presented studies is more than 75,000 anthropometric data sets of children and adolescents from 1980 up to today, together with historical height data available in the scientific literature. Due to reduced daily physical activity, today's 6-18-year-olds have lower values of pelvic and elbow breadths. The observed increase in body height can be explained by hierarchies in the social networks of human societies, contrary to earlier explanations (influence of nutrition, good living conditions and genetics).
A shift towards a more feminine fat distribution pattern in boys and girls parallels the increase of chemicals in our environment that can affect the hormone system. Changing environmental conditions can have selective effects over generations, so that the genotype whose individuals have a higher progeny rate than the other individuals in a population becomes increasingly prevalent. Those individuals then form the phenotype that allows optimum adaptation to the changed environmental conditions. Due to the slow patterns of succession and the low progeny rate (Hawkes et al. 1998), changes that become quickly visible in the phenotype due to changes in the genotype of a population are unlikely to occur in Homo sapiens sapiens within a short time. In the data sets on which the presented investigations are based, such changes appear virtually impossible. The study periods cover 5-30 years, up to a maximum of 100 years (for the body-height data from historical data sets).
In this work, the basic principles of self-organization of diblock copolymers having the inherent property of selective or specific non-covalent binding were examined. By introducing electrostatic, dipole–dipole, or hydrogen-bonding interactions, it was hoped to add complexity to the self-assembled mesostructures and to extend the level of ordering from the nanometer to a larger length scale. This work may be seen in the framework of biomimetics, as it combines features of synthetic polymer and colloid chemistry with basic concepts of structure formation applying in supramolecular and biological systems. The copolymer systems under study were (i) block ionomers, (ii) block copolymers with acetoacetoxy chelating units, and (iii) polypeptide block copolymers.
The habilitation thesis presented here includes results from several studies dealing with fluid-rock interactions and rock deformation processes in active fault zones. The focus in all of these studies is on the influence of clay minerals on the geochemical and the hydro-mechanical behavior of the fault rocks. The research was conducted on rock cores and cuttings from four scientific drilling projects at the San Andreas Fault (USA), the Nankai Trough subduction zone and the Japan Trench subduction zone (Japan), as well as the Alpine Fault in New Zealand. These projects, funded by the ICDP (International Continental Scientific Drilling Program) and the IODP (International Ocean Discovery Program), were all conducted with the aim of monitoring and better understanding earthquakes.
Chapter 1 contains a short introduction to the topic, with basic principles and objectives regarding the research approach. Chapter 2 describes the state of the art in clay mineral and fault zone science, gives a short description of the individual drilling projects and their locations on which the research was based, and summarizes the most important analytical methods used. Chapter 3 comprises ten peer-reviewed publications that are connected thematically and methodologically. The papers were published in the years 2006-2015, and additional related publications including myself as co-author are given in the literature list. The ten publications address different questions concerning the formation of clay minerals and processes of fluid-rock interaction in active fault zones. Six papers contain results from the SAFOD drilling project, USA (San Andreas Fault Observatory at Depth), with the main focus on fluid-rock interaction processes in fault rocks and the formation and location of clay minerals. Three publications report on research from the NanTroSEIZE drilling project (Nankai Trough Seismogenic Zone Experiment) and the JFAST drilling project (Japan Trench Fast Drilling Project). Both projects are situated in Japan. Here, the swelling behavior of smectite clay minerals in relation to changing environmental conditions (e.g. temperature and/or humidity) was investigated. The last publication included here concerns a study from the DFDP project (Deep Fault Drilling Project) in New Zealand, where I investigated the deformation of clay minerals in the context of the hydro-mechanical behavior of the fault zone rocks. I was first author of nine of the publications and in charge of the project preparation, the measurements and data analyses, and the completion of the manuscripts. As co-author of the other publication, I was responsible for the electron microscopy analyses (SEM and TEM) and their interpretation.
The key results from the publications in Chapter 3 are discussed in Chapter 4 with additional considerations from more recent papers. Following the major theses in Chapter 5, Chapter 6 highlights a future research project in clay mineralogy research at the GFZ. An appendix includes more detailed descriptions of the laboratory equipment and lists of all publications, conference contributions and teaching courses and modules.
In the present thesis, self-assembly of hydrophilic polymers, reinforced hydrogels and inorganic/polymer hybrids were examined. The thesis describes an avenue from polymer synthesis via various methods over polymer self-assembly to the formation of polymer materials that have promising properties for future applications.
Hydrophilic polymers were utilized to form multi-phase systems, water-in-water emulsions and self-assembled structures, e.g. particles/aggregates or hollow structures, from completely water-soluble building blocks. The structuring of aqueous environments by hydrophilic homo- and block copolymers was further utilized in the formation of supramolecular hydrogels with compartments or specific thermal behavior. Furthermore, inorganic graphitic carbon nitride (g-CN) was utilized as a photoinitiator for hydrogel formation and as a reinforcer for hydrogels. As such, hydrogels with remarkable mechanical properties were synthesized, e.g. with high compressibility, a high storage modulus or lubricity. In addition, g-CN was combined with polymers to form a broad range of materials, e.g. coatings, films or latexes, that could be utilized in photocatalytic applications. Another inorganic material class was combined with polymers in the present thesis as well, namely metal-organic frameworks (MOFs). It was shown that the pore structure of MOFs enables improved control over tacticity and the achievement of high molar masses. Furthermore, MOF-based polymerization catalysis was introduced, with improved control for coordinating monomers, catalyst recyclability and decreased metal contamination in the product. Finally, the effect of external influences on MOF morphology was studied, e.g. via solvent or polymer additives, which allowed the formation of various MOF structures.
Overall, advances in several areas of polymer science are presented here. A major topic of the thesis was hydrophilic polymers and hydrogels, which currently constitute significant materials in the polymer field due to promising future applications in biomedicine. Moreover, the combination of polymers with materials from other areas of research, i.e. g-CN and MOFs, provided various new materials with remarkable properties that are also of interest for future applications, e.g. in coatings, particle structures and catalysis.
Rivers have always flooded their floodplains. Over 2.5 billion people worldwide have been affected by flooding in recent decades. The economic damage is also considerable, averaging 100 billion US dollars per year. There is no doubt that damage and other negative effects of floods can be avoided. However, this has a price, both financially and politically. Costs and benefits can be estimated through risk assessments. Questions about the location and frequency of floods, about the objects that could be affected and about their vulnerability are of importance for flood risk managers, insurance companies and politicians. Thus, variables and factors from the fields of both hydrology and socio-economics play a role, with multi-layered connections between them. Dikes along a river are one example: on the one hand they contain floods, but on the other hand, by narrowing the natural floodplains, they accelerate the flood discharge and increase the danger of flooding for residents downstream. Such larger connections must be included in the assessment of flood risk. In current procedures, however, this is accompanied by simplifying assumptions. Risk assessments are therefore fuzzy and associated with uncertainties.
This thesis investigates the benefits and possibilities of new data sources for improving flood risk assessment. New methods and models are developed which take the mentioned interrelations better into account and which also quantify the uncertainties of the model results, thus enabling statements about the reliability of risk estimates. For this purpose, data on flood events from various sources are collected and evaluated. These include precipitation and flow records at measuring stations as well as, for instance, images from social media, which, with their location information, can help to delineate flooded areas and estimate flood damage. Machine learning methods have been successfully used to recognize and understand correlations between floods and their impacts from a wide range of data and to develop improved models.
Risk models help to develop and evaluate strategies to reduce flood risk. These tools also provide advanced insights into the interplay of various factors and on the expected consequences of flooding. This work shows progress in terms of an improved assessment of flood risks by using diverse data from different sources with innovative methods as well as by the further development of models. Flood risk is variable due to economic and climatic changes, and other drivers of risk. In order to keep the knowledge about flood risks up-to-date, robust, efficient and adaptable methods as proposed in this thesis are of increasing importance.
The behaviour of an adhering cell is strongly influenced by the chemical, topographical and mechanical properties of the surface it attaches to. During recent years, it has been found experimentally that adhering cells actively sense the elastic properties of their environment by pulling on it through numerous sites of adhesion. The resulting build-up of force at sites of adhesion depends on the elastic properties of the environment and is converted into corresponding biochemical signals, which can trigger cellular programmes like growth, differentiation, apoptosis, and migration. In general, force is an important regulator of biological systems, for example in hearing and touch, in wound healing, and in the rolling adhesion of leukocytes on vessel walls. In the habilitation thesis by Ulrich Schwarz, several theoretical projects are presented which address the role of forces and elasticity in cell adhesion. (1) A new method has been developed for calculating the cellular forces exerted at sites of focal adhesion on micro-patterned elastic substrates. The main result is that cell-matrix contacts function as mechanosensors, converting internal force into protein aggregation. (2) A one-step master equation for the stochastic dynamics of adhesion clusters as a function of cluster size, rebinding rate and force has been solved both analytically and numerically. Moreover, this model has been applied to the regulation of cell-matrix contacts, to dynamic force spectroscopy, and to rolling adhesion. (3) Using linear elasticity theory and the concept of force dipoles, a model has been introduced and solved which predicts the positioning and orientation of mechanically active cells in soft materials, in good agreement with experimental observations for fibroblasts on elastic substrates and in collagen gels.
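The one-step master equation summarized in point (2) lends itself to a compact numerical illustration. The sketch below is a minimal Gillespie simulation of an adhesion cluster in which i closed bonds sharing a dimensionless force f rupture at rate i*exp(f/i) (a Bell-type law) and open bonds rebind at rate gamma*(N - i); all parameter values are illustrative and not taken from the thesis.

```python
import math
import random

def cluster_lifetime(n_total=10, force=1.0, rebind=1.0, seed=2, runs=200):
    """Gillespie simulation of a one-step master equation for an
    adhesion cluster: i closed bonds rupture at rate i*exp(f/i)
    (shared dimensionless force f, Bell law) and open bonds rebind
    at rate gamma*(N - i).  Returns the mean time until complete
    cluster dissociation (i = 0), averaged over `runs` realizations.
    All parameters are illustrative, not values from the thesis."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        i, t = n_total, 0.0
        while i > 0:
            off = i * math.exp(force / i)   # force-accelerated rupture
            on = rebind * (n_total - i)     # rebinding of open bonds
            rate = off + on
            t += rng.expovariate(rate)      # waiting time to next event
            i += -1 if rng.random() < off / rate else 1
        total += t
    return total / runs
```

Increasing the shared force shortens the mean cluster lifetime, which is the qualitative stability behaviour that the analytical treatment in the thesis quantifies.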
Fabricating electronic devices from natural, renewable resources has been a common goal in engineering and materials science for many years. In this regard, carbon is of special significance due to its biological compatibility. In the laboratory, carbonized materials and their composites have proven to be promising candidates for a range of future applications in electronics, optoelectronics, and catalytic systems. On the industrial scale, however, their application is inhibited by tedious and expensive preparation processes and a lack of control over the processing and material parameters. Therefore, we are exploring new concepts for the direct utilization of functional carbonized materials in electronic applications. In particular, laser-induced carbonization, or carbon laser-patterning (CLaP), is emerging as a new tool for the precise and selective synthesis of functional carbon-based materials for flexible on-chip applications.
We developed an integrated approach for on-the-spot laser-induced synthesis of flexible, carbonized films with specific functionalities. To this end, we design versatile precursor inks made from naturally abundant starting compounds and reactants to cast films which are carbonized with an infrared laser to obtain functional patterns of conductive porous carbon networks. In our studies we obtained deep mechanistic insights into the formation process and the microstructure of laser-patterned carbons (LP-C). We shed light on the kinetic reaction mechanism based on the interplay between the precursor properties and the reaction conditions. Furthermore, we investigated the use of porogens, additives, and reactants to provide a toolbox for the chemical and physical fine-tuning of the electronic and surface properties and the targeted integration of functional sites into the carbon network. Based on this knowledge, we developed prototype resistive chemical and mechanical sensors. In further studies, we show the applicability of LP-C as electrode materials in electrocatalytic and charge-storage applications.
To put our findings into a common perspective, the general part embeds our results in the context of general carbonization strategies, the fundamentals of laser-induced materials processing, and a broad literature review of state-of-the-art laser-carbonization.
The geochemical composition of oceanic basalts provides a window into the distribution of geochemical elements within the Earth’s mantle in space and time. In conjunction with a thorough knowledge of how the different elements behave, e.g. during melt formation and evolution, or of their partitioning behaviour between, e.g., minerals and melts, this information has been translated into various models of how oceanic crust is formed along plume-influenced or normal mid-ocean ridge segments, how oceanic crust evolves in response to seawater, how oceanic crust is recycled by subduction, and so forth. The work presented in this habilitation was aimed at refining existing models and putting further constraints on some of the major open questions in this field of research, while at the same time trying to increase our knowledge of the behaviour of noble gases as tracers of melt formation and evolution processes. In the course of this work the author and her co-workers were able to answer one of the major questions concerning the formation of oceanic crust along plume-influenced ridges: in which physical state does the plume material enter the ridge? Based on He, Ne and Ar data from submarine volcanic glass, the author and her co-workers have shown that the interaction of mantle plumes with mid-ocean ridges occurs in the physical form of melts. In addition, the author and her co-workers have put further constraints on one of the major questions concerning the formation of oceanic crust along normal mid-ocean ridges, namely: how is the mid-ocean ridge system cooled effectively enough to form the lower oceanic crust? Based on Ne and Ar data in combination with Cl/K ratios of basaltic glass from the Mid-Atlantic Ridge and estimates of crystallisation pressures, they have shown that seawater penetration reaches lower crustal levels close to the Moho, indicating that hydrothermal circulation might be an effective cooling mechanism even for the deep parts of the oceanic crust.
Considering subduction recycling, the heterogeneity of the Earth’s mantle and mantle dynamic processes, the key question is: on which temporal and spatial scales is the Earth’s mantle geochemically heterogeneous? In the course of this work the author and her co-workers have shown, based on Cl/K ratios in conjunction with the Sr, Nd and Pb isotopes of the OIBs representing the type localities of the different mantle endmembers, that the quantity of Cl recycled into the mantle via subduction is not uniform and that neither the HIMU nor the EM1 and EM2 mantle components can be considered distinct mantle endmembers. In addition, we have shown, based on He, Ne and Ar isotope and trace-element data from the Foundation hotspot, that the near-ridge seamounts of the Foundation seamount chain erupt lavas with a trace-element signature clearly characteristic of oceanic gabbro, which indicates the existence of recycled, virtually unchanged lower oceanic crust in the plume source. This is a clear sign of the inefficiency of the stirring mechanism operating at mantle depth. Similar features are seen in other near-axis hotspot magmas around the world. Based on He, Sr, Nd, Pb and O isotopes and trace elements in primitive mafic dykes from the Etendeka flood basalts, NW Namibia, the author and her co-workers have shown that deep, less degassed mantle material carried up by a mantle plume contributed significantly to the flood basalt magmatism, thus reinforcing the lately often-challenged concept of mantle plumes and their role in the formation of large igneous provinces. The Etendeka flood basalts are part of the South Atlantic LIP, which is associated with the breakup of Gondwana, the formation of the Paraná-Etendeka flood basalts and the Walvis Ridge - Tristan da Cunha hotspot track.
Studying the behaviour of noble gases during melt formation and evolution, the author and her co-workers have shown that He can be considerably more susceptible to changes during melt formation and evolution, resulting not only in a complete decoupling of He isotopes from, e.g., Ne or Pb isotopes but also in a complete loss of the primary mantle isotope signal. They have also shown that this decoupling occurs mainly during melt formation, requiring He to be more compatible during mantle melting than Ne. In addition, the author and her co-workers were able to show that the incorporation of atmospheric noble gases into igneous rocks is in general a two-step process: (1) magma contamination by assimilation of altered oceanic crust results in the entrainment of air-equilibrated seawater noble gases; (2) atmospheric noble gases are adsorbed onto grain surfaces during sample preparation. Considering the ubiquitous presence of the contamination signal, this implies that magma contamination by assimilation of a seawater-sourced component is an integral part of mid-ocean ridge basalt evolution.
This work brings together two introductory chapters and ten essays that can be read as critical-constructive contributions to an "experiential understanding" (Buck) of physics. The traditional design of school physics aims at a systematic presentation of scientific knowledge, which is then applied to selected examples: school experiments prove the statements of the systematic body of knowledge (or at least make them plausible), and selected phenomena are explained. Within such a framework, however, there is a real danger of losing the connection to the students' lived experience and interests. This problem has been known for at least 90 years, but didactic responses - inquiry-based learning, contextualization, student experiments, etc. - tend to address symptoms rather than causes. Science becomes exciting by establishing a specifically investigative relationship to the world: one would, as it were, have to learn not knowledge but "how to ask questions" (and, of course, how answers are found...). But what can this look like at the level of school physics, and what theoretical framework can there be for it? The collected papers follow some of these trails: the rejection of overly model-bound thinking in phenomenological optics, the distinction between formal-mathematical thinking and forms of scientific reasoning and evidence closer to lived reality, the potential of alternative interpretations of "physics teaching", the question of "understanding", and others.
In doing so, not only do connections to the modern educational paradigm of competence become visible; the work also attempts to provide a whole series of concrete examples from (school) physics of what happens when the topic is not answers that are already known, but expeditions devoted to the physical world: the key concepts of the discipline, the methods of data collection and interpretation, and the movements of searching and thinking are then discussed in a way that does not seek to rest on the systematics of the subject, but rather to motivate it, give it contour, and make it comprehensible.
This thesis deals with different aspects of flood risk in Germany. In twelve papers, new scientific findings about flood hazards, factors that influence flood losses, and effective private precautionary measures are presented. The seasonal distribution of flooding is shown for the whole of Germany. Furthermore, possible impacts of climate change on discharges and flood frequencies are estimated for the catchment of the river Rhine. Moreover, the effects that may result from levee breaches are simulated for reaches of the Lower Rhine. Flood losses are the focus of the second part of the thesis: after the flood of August 2002, approximately 1700 households were interviewed by telephone. This made it possible to quantify the influence of different factors, such as flood duration or the contamination of the flood water with oil, on the extent of financial flood damage. On this basis, a new model was derived by which flood losses can be calculated on a large scale. Moreover, recommendations for the improvement of private precaution could be derived. For example, the analysis revealed that insured households were compensated more quickly and more completely than uninsured ones. It also became clear that different groups, such as tenants and homeowners, differ in their capability to take precautionary measures; this has to be considered in future risk communication. In 2005 and 2006, the rivers Elbe and Danube were again affected by flooding. A renewed survey among households and public authorities enabled us to investigate the improvement of flood risk management and precaution in the City of Dresden. Several methods and findings of this thesis are applicable to water resources management issues and contribute to an improvement of flood risk analysis and management in Germany.
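The influence of factors such as water depth, inundation duration and contamination on relative building loss can be illustrated with a simple stage-damage function. The functional form and all coefficients below are hypothetical placeholders, not the fitted values of the model derived in the thesis.

```python
def relative_loss(depth_m, duration_h=24, contaminated=False):
    """Illustrative stage-damage function: relative building loss (0..1)
    as a saturating function of water depth, modified by inundation
    duration and oil contamination.  All coefficients are hypothetical,
    chosen only to show the qualitative shape of such a model."""
    base = min(1.0, 0.27 * depth_m ** 0.5)  # square-root depth-damage curve
    if duration_h > 72:
        base = min(1.0, base * 1.2)   # long inundation raises losses
    if contaminated:
        base = min(1.0, base * 1.3)   # oil contamination raises losses
    return base
```

Multi-factor functions of this kind replace depth-only damage curves, reflecting the survey finding that duration and contamination significantly affect the financial loss.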
Hydrothermal carbonisation
(2013)
The world’s appetite for energy is producing growing quantities of CO2, a pollutant that contributes to the warming of the planet and which currently cannot be removed or stored in any significant way. Other natural reserves are also being devoured at alarming rates, and current assessments suggest that we will need to identify alternative sources in the near future. With the aid of materials chemistry it should be possible to create a world in which energy use need not be limited and where usable energy can be produced and stored wherever it is needed, where we can minimize and remediate emissions as new consumer products are created, whilst healing the planet and preventing further disruptive and harmful depletion of valuable mineral assets. In achieving these aims, the creation of new and, very importantly, greener industries and new sustainable pathways is crucial. In all of the aforementioned applications, new carbon-based materials, ideally produced via inexpensive, low-energy-consumption methods using renewable resources as precursors, with flexible morphologies, pore structures and functionalities, are increasingly viewed as ideal candidates to fulfill these goals. The resulting materials should be a feasible solution for the efficient storage of energy and gases; at the end of their life, such materials should ideally act to improve soil quality and serve as potential CO2 storage sinks. This is exactly the subject of this habilitation thesis: an alternative technology to produce carbon materials from biomass in water, using low carbonisation temperatures and self-generated pressures. This technology is called hydrothermal carbonisation. It has been developed during the past five years by a group of young and talented researchers working under the supervision of Dr. Titirici at the Max Planck Institute of Colloids and Interfaces, and it is now a well-recognised methodology to produce carbon materials with important applications in our daily lives.
These applications include electrodes for portable electronic devices, filters for water purification, catalysts for the production of important chemicals as well as drug delivery systems and sensors.
On the effects of disorder on the ability of oscillatory or directional dynamics to synchronize
(2024)
In this thesis I present a collection of publications of my work, containing analytic results and observations from numerical experiments on the effects of various inhomogeneities on the ability of coupled oscillators to synchronize their collective dynamics. Most of these works are concerned with the effects of Gaussian and non-Gaussian noise acting on the phase of autonomous oscillators (Secs. 2.1-2.4) or on the direction of higher-dimensional state vectors (Secs. 2.5, 2.6). I obtain exact and approximate solutions to the non-linear equations governing the distributions of phases, or perform a linear stability analysis of the uniform distribution to obtain the transition point from a completely disordered state to partial order or more complicated collective behavior. Other inhomogeneities that can affect the synchronization of coupled oscillators are irregular, chaotic oscillations or a complex, and possibly random, structure of the coupling network. In Section 2.9 I present a new method to define the phase and frequency linear response functions for chaotic oscillators. In Sections 2.4, 2.7 and 2.8 I study synchronization in complex networks of coupled oscillators. Each section in Chapter 2 (Manuscripts) is devoted to one research paper and begins with a list of the main results, a description of my contributions to the work, and a short account of the scientific context, i.e. the questions and challenges which started the research and the relation of the work to my other research projects. The manuscripts in this thesis are reproductions of the arXiv versions, i.e. preprints under a Creative Commons licence.
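The transition from complete disorder to partial order under Gaussian phase noise can be illustrated with a minimal mean-field simulation of identical Kuramoto-type phase oscillators (a sketch with illustrative parameters; the models studied in the manuscripts are more general). For noise intensity D, the incoherent state of this mean-field model loses stability at the coupling K_c = 2D.

```python
import math
import random

def kuramoto_noisy(n=200, coupling=3.0, noise=0.5, dt=0.01, steps=4000, seed=1):
    """Euler-Maruyama simulation of n identical phase oscillators with
    mean-field coupling and Gaussian phase noise of intensity D = noise.
    Returns the final Kuramoto order parameter r.  Parameters are
    illustrative, not taken from the thesis."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    sigma = math.sqrt(2 * noise * dt)  # noise increment std. deviation
    for _ in range(steps):
        # mean field r * exp(i*psi) computed from the current phases
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * coupling * r * math.sin(psi - t) + rng.gauss(0, sigma)
                 for t in theta]
    cx = sum(math.cos(t) for t in theta) / n
    sx = sum(math.sin(t) for t in theta) / n
    return math.hypot(cx, sx)
```

With K = 3 and D = 0.5 (above K_c = 1) the order parameter r settles at a finite value, while for K well below K_c it stays at the finite-size noise floor of order 1/sqrt(n).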
The Sun is surrounded by a hot atmosphere with a temperature of about 10^6 K, the corona. The corona and the solar wind are fully ionized and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrometers, like the SUMER instrument on board the SOHO spacecraft, reveal preferential heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider, e.g., protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium, and the solar corona is far from this state. Furthermore, fluid models cannot describe processes like the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not have these limitations and are therefore well suited for explaining the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which contradicts the observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e.
core, halo, and a "strahl" of finite width. The model is, however, not only applicable to the quiet Sun. The propagation of energetic electrons from a solar flare is studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth to approximately the same degree. In the corona, the interaction of electrons with whistler waves does not only lead to scattering, but also to the formation of a suprathermal halo, as observed in interplanetary space. This effect is studied both for the solar wind and for the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations: the quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating and can therefore be expected in any hot stellar corona. The second part of this thesis details how growth or damping rates of plasma waves can be calculated from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and of whistler waves during solar flares, are studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
By using mouse outcross populations in combination with bioinformatic approaches, it was possible to identify and characterize novel genes regulating body weight, fat mass and β-cell function, all of which contribute to the pathogenesis of obesity and type 2 diabetes (T2D). In detail, the presented studies identified:
1. Ifi202b/IFI16 as an adipogenic gene involved in adipocyte commitment, the maintenance of white adipocyte identity, fat cell size and the inflammatory state of adipose tissue;
2. Pla2g4a/PLA2G4A as a gene linked to increased body weight and fat mass, with higher expression in the adipose tissue of obese mice and pigs as well as of obese human subjects;
3. Ifgga2/IRGM as a novel regulator of lipophagy protecting from excess hepatic lipid accumulation;
4. Nidd/DBA as a diabetogenic locus containing Kti12, Osbpl9, Ttc39a and Calr4, with differential expression in pancreatic islets and/or genetic variants;
5. miR-31 as more highly expressed in the adipose tissue of obese and diabetic mice and humans, targeting PPARγ and GLUT4 and thereby involved in adipogenesis and insulin signaling;
6. Gjb4 as a novel gene triggering the development of T2D by reducing insulin secretion, inducing apoptosis and inhibiting proliferation.
The performed studies confirmed the complexity and strongly heritable character of obesity and T2D. A high number of genetic variations, each with a small effect, collectively influence the degree and severity of the disease. The use of mouse outcross populations is a valid tool for disease gene identification; however, to facilitate and accelerate the process of gene identification, combining mouse cross data with advanced sequencing resources and publicly available data sets is essential. The main goal of future studies should be the translation of these novel molecular discoveries into useful therapies.
More recently, several classes of novel unimolecular combination therapeutics have emerged with efficacy superior to currently prescribed options, and they offer the potential to reverse obesity and T2D (Finan et al., 2015). The glucagon-like peptide-1 (GLP-1)-estrogen conjugate, which targets estrogen to cells expressing GLP-1 receptors, was shown to improve energy, glucose and lipid metabolism as well as to reduce food reward (Finan et al., 2012; Schwenk et al., 2014; Vogel et al., 2016). Another possibility is the development of miRNA-based therapeutics to prevent obesity and T2D, such as miRNA mimetics, anti-miRNA oligonucleotides and exosomes loaded with miRNAs (Ji and Guo, 2019; Gottmann et al., 2020). As described above, genome-wide association studies of polygenic obesity and T2D traits in humans have also led to the identification of numerous gene variants with modest effects, most of them of unknown function (Yazdi et al., 2015). These discoveries have resulted in novel animal models and have illuminated new biological pathways. Therefore, integrating mouse and human genetic approaches and exploiting their synergies has the potential to identify further genes responsible for common and Mendelian forms of obesity and T2D, as well as gene × gene and gene × environment interactions (Yazdi et al., 2015; Ingelsson and McCarthy, 2018). This combination may help to unravel the missing heritability of obesity and T2D, to identify novel drug targets and to design more efficient and personalized obesity prevention and management programs.
The Arctic plays a key role in Earth’s climate system, as global warming is predicted to be most pronounced at high latitudes and because one third of the global carbon pool is stored in ecosystems of the northern latitudes. In order to improve our understanding of present and future carbon dynamics in climate-sensitive permafrost ecosystems, the present study concentrates on investigations of the microbial controls of methane fluxes, of the activity and structure of the involved microbial communities, and of their response to changing environmental conditions. For this purpose an integrated research strategy was applied, which connects trace gas flux measurements with the soil-ecological characterisation of permafrost habitats and molecular-ecological analyses of microbial populations. Furthermore, methanogenic archaea isolated from Siberian permafrost have been used as potential keystone organisms for studying and assessing life under extreme conditions. Long-term studies on methane fluxes have been carried out since 1998. These studies revealed considerable seasonal and spatial variations of methane emissions across the different landscape units, ranging from 0 to 362 mg m^-2 d^-1. For the overall balance of methane emissions from the entire delta, the first land cover classification based on Landsat images was performed and applied to upscale the methane flux data sets. The regionally weighted mean daily methane emission of the Lena Delta (10 mg m^-2 d^-1) is only one fifth of the values calculated for other Arctic tundra environments. The calculated annual methane emission of the Lena Delta amounts to about 0.03 Tg. The low methane emission rates obtained in this study result from the high-resolution remote sensing data basis used, which provides a more realistic estimate of the actual methane emissions on a regional scale. Soil temperature and near-surface atmospheric turbulence were identified as the driving parameters of methane emission.
A flux model based on these variables explained the variations of the methane budget reasonably well, consistent with continuous processes of microbial methane production and oxidation and gas diffusion through soil and plants. The results show that the Lena Delta contributes significantly to the global methane balance because of its extensive wetland areas. The microbiological investigations showed that permafrost soils are colonized by high numbers of microorganisms; the total biomass is comparable to that of temperate soil ecosystems. The activities of methanogens and methanotrophs differed significantly in their rates and distribution patterns, both along the vertical profiles and between the investigated soils. Methane production rates varied between 0.3 and 38.9 nmol h^-1 g^-1, while methane oxidation ranged from 0.2 to 7.0 nmol h^-1 g^-1. Phylogenetic analyses of the methanogenic communities revealed a distinct diversity of methanogens affiliated with the Methanomicrobiaceae, Methanosarcinaceae and Methanosaetaceae, which partly form four specific permafrost clusters. The results demonstrate the close relationship between methane fluxes and the fundamental microbiological processes in permafrost soils. The microorganisms do not only survive in their extreme habitat but can also be metabolically active under in situ conditions. It was shown that a slight increase in temperature can lead to a substantial increase in methanogenic activity within perennially frozen deposits. In the case of permafrost degradation, this would lead to an extensive release from the methane deposits, with subsequent impacts on the total methane budget. Further studies on the stress response of methanogenic archaea isolated from Siberian permafrost, especially the strain Methanosarcina SMA-21, revealed an unexpected resistance of these microorganisms to unfavourable living conditions. Better adaptation to environmental stress was observed at 4 °C than at 28 °C.
For the first time it could be demonstrated that methanogenic archaea from terrestrial permafrost even survive simulated Martian conditions. The results show that, under Mars-like climate conditions, permafrost methanogens are more resistant than methanogens from non-permafrost environments. Owing to their physiological potential and metabolic specificity, microorganisms comparable to methanogens from terrestrial permafrost can be seen as among the most likely candidates for life on Mars.
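The temperature sensitivity of methanogenic activity noted above is often summarized by a Q10 factor, the multiplicative rate increase per 10 °C of warming. The sketch below assumes a Q10 of 2.5 and an arbitrary reference rate, purely for illustration; these are not values reported in the study.

```python
def methane_production(temp_c, rate_at_4c=1.0, q10=2.5):
    """Hypothetical Q10 temperature response of methane production:
    the rate multiplies by q10 for every 10 degC of warming relative
    to a 4 degC reference (illustrative values only)."""
    return rate_at_4c * q10 ** ((temp_c - 4.0) / 10.0)
```

Under this assumed response, even a few degrees of permafrost warming translate into a disproportionate increase in methanogenic activity, which is the qualitative point made above.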
In the present work, various experiments investigating the electrical conductivity of suture and collision zones are discussed in context in order to demonstrate the possibilities that modern magnetotellurics (MT) offers for imaging fossil tectonic systems. From the new high-resolution images of electrical conductivity, potential commonalities of different tectonic units can be derived. Within the last decade, the further development of the instruments and of the processing and interpretation methods has opened up entirely new perspectives for geodynamic deep sounding. This is evident from my research, which I acquired myself in the form of projects and carried out at the Deutsches GeoForschungsZentrum Potsdam. Table A lists the experiments considered in this work, which were carried out in recent years either as array or as profile measurements. Field experiments of this size require a team of scientists, students and technical staff. This also means that students and doctoral candidates under my supervision have treated aspects of these experiments in diploma, bachelor's and master's theses or dissertations. I contributed as a co-author to the subsequent publication of these works. The enclosed publications contain an introduction to the magnetotelluric method and, where applicable, descriptions of newly developed methods. A general presentation of the theoretical foundations of magnetotellurics can be found, for example, in Chave & Jones (2012); Simpson & Bahr (2005); Kaufman & Keller (1981); Nabighian (1987); Weaver (1994). The work also includes a glossary in which a number of terms and abbreviations are explained.
I have decided to leave in English those terms for which there is no adequate German translation or which would take on a different or misleading meaning in German; they are marked by italics.