Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force in nature that we understand least. One of the cardinal predictions of this theory is black holes: massive, dark objects found in the majority of galaxies. Our own Galactic Center contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and to look for deviations from real observations. Nuclear star clusters harboring a massive, dark object with a mass of up to ~ ten million solar masses are good testbeds for probing the event horizon of the potential SMBH with stars. The channels for interactions between stars and the central massive black hole are (a) compact stars and stellar-mass black holes gradually inspiraling into the SMBH due to the emission of gravitational radiation, known as an “Extreme Mass Ratio Inspiral” (EMRI), and (b) stars producing gas that is accreted by the SMBH, whether through normal stellar evolution or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines, which combined will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
This professorial dissertation collects several empirical studies on tax distribution and tax reform in Germany. Chapter 2 deals with two studies on effective income taxation, based on representative micro data sets from tax statistics. The first study analyses the effective income taxation at the individual level, in particular with respect to the top incomes. It is based on an integrated micro data file of household survey data and income tax statistics, which captures the entire income distribution up to the very top. Despite substantial tax base erosion and reductions of top tax rates, the German personal income tax has remained effectively progressive. The distribution of the tax burden is highly concentrated and the German economic elite is still taxed relatively heavily, even though the effective tax rate for this group has significantly declined. The second study of Chapter 2 highlights the effective income taxation of functional income sources, such as labor income, business and capital income, etc. Using income tax micro data and microsimulation models, we allocate the individual income tax liability to the respective income sources, according to different apportionment schemes accounting for losses. We find that the choice of the apportionment scheme markedly affects the tax shares of income sources and implicit tax rates, in particular those of capital income. Income types without significant losses, such as labor income or transfer incomes, show higher tax shares and implicit tax rates if we account for losses. The opposite is true for capital income, in particular for income from renting and leasing. Chapter 3 presents two studies on business taxation, based on representative micro data sets from tax statistics and the microsimulation model BizTax. The first part provides a study on fundamental reform options for the German local business tax.
We find that today’s high concentration of local business tax revenues on corporations with high profits decreases if the tax base is broadened by integrating more taxpayers and by including more elements of business value added. The reform scenarios with a broader tax base distribute the local business tax revenue per capita more equally across regional categories. The second study of Chapter 3 discusses the macroeconomic performance of business taxation against the background of corporate income. A comparison of the tax base reported in tax statistics with the macroeconomic corporate income from national accounts points to considerable tax base erosion. The average implicit tax rate on corporate income has been around 20 percent since 2001, thus falling considerably short of statutory tax rates and effective tax rates discussed in the literature. In the absence of detailed accounting data, it is hard to give precise reasons for the presumed tax base erosion. Chapter 4 deals with several assessment studies on the ecological tax reform implemented in Germany as of 1999. First, we describe the scientific, ideological, and political background of the ecological tax reform. Further, we present the main findings of a first systematic impact analysis. We employ two macroeconomic models, an econometric input-output model and a recursive-dynamic computable general equilibrium (CGE) model. Both models show that Germany’s ecological tax reform helps to reduce energy consumption and CO2 emissions without having a substantial adverse effect on overall economic growth. It could have a slightly positive effect on employment. The reform’s impact on the business sector and the effects of special provisions granted to agriculture and the goods and materials sectors are outlined in a further study. The special provisions avoid higher tax burdens on energy-intensive production. However, they widely reduce the marginal tax rates and thus the incentives for energy saving.
Although the 2003 reform of the special provisions increased the overall tax burden of the energy-intensive industry, the enlarged eligibility for tax rebates neutralizes the ecological incentives. Based on the Income and Consumption Survey of 2003, we have analyzed the distributional impact of the ecological tax reform. The increased energy taxes show a clear regressive impact relative to disposable income. Families with children face a higher tax burden relative to household income. The reduction of pension contributions and the automatic adjustment of social security transfers widely mitigate this regressive impact. Households with low income or with many children nevertheless bear a slight increase in tax burden. Refunding the eco tax revenue by an eco bonus would make the reform clearly progressive.
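The apportionment of the individual income tax liability to income sources described in Chapter 2 can be sketched as a toy calculation. This is a hedged illustration with hypothetical numbers and deliberately simplified schemes, not the actual microsimulation models or their apportionment rules:

```python
# Hedged, illustrative toy calculation (hypothetical numbers; not the actual
# microsimulation models or their apportionment rules): splitting an income
# tax liability across income sources under two simple apportionment schemes,
# either ignoring losses or netting them against the total first.

def apportion(tax_liability, sources, net_losses=False):
    """Allocate tax_liability proportionally across income sources.

    sources: dict of source name -> income (negative values are losses).
    net_losses=False: losses are dropped and tax is split over positive
                      incomes only.
    net_losses=True:  losses are netted against the total, so loss-making
                      sources receive a negative tax share and the shares
                      of the remaining sources rise accordingly.
    """
    base = dict(sources) if net_losses else {k: max(v, 0.0) for k, v in sources.items()}
    total = sum(base.values())
    return {k: tax_liability * v / total for k, v in base.items()}

incomes = {"labor": 60000.0, "capital": 10000.0, "renting": -20000.0}

ignore = apportion(15000.0, incomes)                # losses ignored
net = apportion(15000.0, incomes, net_losses=True)  # losses netted
print(round(ignore["labor"]), round(net["labor"]))  # labor share rises: 12857 18000
```

Consistent with the finding cited above: once losses (here, hypothetically, from renting and leasing) are accounted for, the tax share attributed to labor income rises, while the loss-making source carries a negative share.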
This cumulative habilitation thesis presents new work on the systematics, paleoecology, and evolution of antelopes and other large mammals, focusing mainly on the late Miocene to Pleistocene terrestrial fossil record of Africa and Arabia. The studies included here range from descriptions of new species to broad-scale analyses of diversification and community evolution in large mammals over millions of years. A uniting theme is the evolution, across both temporal and spatial scales, of the environments and faunas that characterize modern African savannas. One conclusion of this work is that macroevolutionary changes in large mammals are best characterized at regional (subcontinental to continental) and long-term temporal scales. General views of evolution developed on records that are too restricted in spatial and temporal extent are likely to ascribe too much influence to local or short-lived events. While this distinction in the scale of analysis and interpretation may seem trivial, it is challenging to implement given the geographically and temporally uneven nature of the fossil record, and the difficulties of synthesizing spatially and temporally dispersed datasets. This work attempts to do just that, bringing together primary fossil discoveries from eastern Africa to Arabia, from the Miocene to the Pleistocene, and across a wide range of (mainly large mammal) taxa. The end result is support for hypotheses stressing the impact of both climatic and biotic factors on long-term faunal change, and a more geographically integrated view of evolution in the African fossil record.
Biogenic amines are small organic compounds that can act as neurotransmitters, neuromodulators, and/or neurohormones in both vertebrates and invertebrates. They form an important group of messenger substances and exert their effects by binding to a particular class of receptor proteins known as G protein-coupled receptors. In insects, the biogenic amines comprise the messengers dopamine, tyramine, octopamine, serotonin, and histamine. Among many other effects, some of these biogenic amines have been shown, for example, to modulate the gustatory sensitivity to sucrose stimuli in the honeybee (Apis mellifera). I have investigated various aspects of aminergic signal transduction in the "model organisms" honeybee and American cockroach (Periplaneta americana). From the honeybee, a model organism for the study of learning and memory, two dopamine receptors, a tyramine receptor, an octopamine receptor, and a serotonin receptor were characterized. The receptors were expressed in cultured mammalian cells in order to analyze their pharmacological and functional properties (coupling to intracellular messenger pathways). Furthermore, various techniques (RT-PCR, Northern blotting, in situ hybridization) were used to examine where and when during development the corresponding receptor mRNAs are expressed in the honeybee brain. The salivary glands of the American cockroach were used as a model system for studying the cellular effects of biogenic amines. In isolated salivary glands, saliva production can be triggered by both dopamine and serotonin, with saliva of differing composition being produced. Dopamine induces the production of a completely protein-free, watery saliva. Serotonin causes the secretion of a protein-containing saliva.
The serotonin-induced protein secretion is mediated by an increase in the concentration of the intracellular messenger cAMP. The pharmacological properties of the dopamine receptors of the cockroach salivary glands were investigated, and the molecular characterization of putative aminergic receptors of the cockroach was begun. Furthermore, I characterized the ebony gene of the cockroach. This gene encodes an enzyme that is probably involved in the inactivation of biogenic amines in the cockroach (as in other insects) and that is expressed in the brain and salivary glands of the cockroach.
Classical physics and chemistry distinguish three types of bonding: the covalent bond, the ionic bond, and the metallic bond. Molecules, on the other hand, are held together by weak intermolecular interactions; despite the weakness of these forces, they are less well understood, yet no less important. In forward-looking fields such as nanotechnology, supramolecular chemistry, and biochemistry they are of fundamental importance.
To describe, predict, and understand weak intermolecular interactions, they first have to be captured theoretically. This involves various quantum-chemical methods, which in this work are presented, compared, further developed, and finally applied to exemplary problems in chemistry. Building on a hierarchy of methods of differing accuracy, they are deployed, elaborated, and combined toward these goals.
What is computed is the electronic structure, i.e. the distribution and energies of the electrons, which essentially hold the atoms together. Since the inaccuracies in the description of the electronic structure depend on the methods used, these effects can be examined in detail, described, and improved upon, and the methods can then be tested on various model systems. The speed of the calculations on modern computers is an essential factor to take into account, since in general the accuracy grows exponentially with the computation time and thus inevitably runs up against the limits of what is feasible.
The most accurate of the methods employed is based on coupled-cluster theory, which enables very good predictions. With it, so-called spectroscopic accuracy is achieved, with deviations of only a few wavenumbers, as comparisons with experimental data show. One way of approximating highly accurate methods is based on density functional theory: here the "Boese-Martin for Kinetics" (BMK) functional was developed, whose functional form reappears in many density functionals published after 2010.
With the aid of the more accurate methods, semiempirical force fields describing intermolecular interactions can then be parametrized for individual systems; these require far less computation time than the methods based on an accurate calculation of the molecular electronic structure.
For larger systems, different methods can also be combined. In this context, embedding schemes were refined and proposed together with new methodological approaches. They employ both symmetry-adapted perturbation theory and the quantum-chemical embedding of fragments into larger, quantum-chemically treated systems.
The development of new methods derives its value essentially from its applications:
In this work, hydrogen bonds were the initial focus. They are among the stronger intermolecular interactions and remain a challenge. Van der Waals interactions, by contrast, are relatively easy to describe with force fields. Many of the methods in use today therefore perform comparatively poorly for systems dominated by hydrogen bonds.
This is followed by an investigation of molecular aggregates and the effects of intermolecular interactions on the vibrational frequencies of molecules. Here we also go beyond the so-called rigid-rotor/harmonic-oscillator approximation.
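As a hedged illustration of the harmonic-oscillator part of this approximation, the harmonic wavenumber of a diatomic can be obtained from the curvature of a model potential at its minimum. All parameters below are illustrative (roughly H2-like), not values from the thesis:

```python
import numpy as np

# Hedged illustration of the harmonic part of the rigid-rotor/harmonic-
# oscillator approximation: the harmonic wavenumber of a diatomic from the
# curvature of a model Morse potential at its minimum. All parameters are
# illustrative (roughly H2-like), not values from the thesis.

D, a, r0 = 0.17, 1.0, 1.4      # well depth (hartree), range (1/bohr), r_e (bohr)
mu = 918.0                     # reduced mass in electron masses (H2-like)

def V(r):
    """Morse potential V(r) = D * (1 - exp(-a*(r - r0)))**2."""
    return D * (1.0 - np.exp(-a * (r - r0))) ** 2

h = 1e-4
k = (V(r0 + h) - 2.0 * V(r0) + V(r0 - h)) / h**2   # force constant, = 2*D*a**2
omega = np.sqrt(k / mu)                            # harmonic frequency (a.u.)
wavenumber = omega * 219474.63                     # hartree -> cm^-1
print(round(wavenumber))                           # a few thousand cm^-1
```

Going beyond this approximation, as the thesis does, means accounting for the anharmonic terms of the potential that the second derivative at the minimum ignores.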
A far-reaching application concerns adsorbates, here molecules on ionic or metallic surfaces. They can be treated with methods similar to those used for intermolecular interactions, and with special embedding schemes they can be described very accurately. The results of these theoretical calculations prompted a re-evaluation of the previously known experimental results.
Molecular crystals are an extremely important field of research. They are held together by weak interactions ranging from van der Waals forces to hydrogen bonds. Here, too, newly developed methods were employed, offering an interesting alternative that is at least as accurate as the currently established methods.
Hence both the methods developed and their applications are extremely diverse. The electronic-structure calculations treated here extend from so-called post-Hartree-Fock methods through density functional theory to semiempirical force fields and combinations thereof. The applications range from individual molecules in the gas phase through adsorption on surfaces to the molecular solid state.
Individuals differ in their tendency to perceive injustice and in their responses to these perceptions. Those high in justice sensitivity tend to show intense negative affective, cognitive, and behavioral responses to injustice that in part also depend on the perspective from which injustice is perceived. The present research project showed that inter-individual differences in justice sensitivity can already be measured and observed in childhood and adolescence, and that early adolescence seems to be an important age range and developmental stage for the stabilization of these differences. Furthermore, in cross-sectional studies, the different justice sensitivity perspectives were related to different forms of externalizing (aggression, ADHD, bullying) and internalizing problem behavior (depressive symptoms) in children and adolescents as well as in adults. Victim sensitivity in particular appears to constitute an important risk factor for a broad range of both externalizing and internalizing maladaptive behaviors and mental health problems, as shown in the studies using longitudinal data. Regarding aggressive behavior, victim justice sensitivity may even constitute a risk factor above and beyond other important and well-established risk factors for aggression and similar sensitivity constructs that had previously been linked to this kind of behavior. In contrast, observer and perpetrator sensitivity (perpetrator sensitivity in particular) tended to show negative links with externalizing problem behavior and instead predicted prosocial behavior in children and adolescents. However, there were also isolated positive relations of perpetrator sensitivity with emotional problems, as well as of observer sensitivity with reactive aggression and depressive symptoms.
Taken together, the findings from the present research show that justice sensitivity forms in childhood at the latest and that it may have important long-term influences on pro- and antisocial behavior and mental health. Justice sensitivity therefore deserves more attention in research on the prevention of, and intervention in, mental health problems and antisocial behavior such as aggression.
Parsing approaches for several grammar formalisms that also generate non-context-free languages are explored. Chomsky grammars, Lindenmayer systems, grammars with controlled derivations, and grammar systems are treated. The formal properties of these mechanisms when used as language acceptors are investigated. Furthermore, cooperating distributed grammar systems are restricted so that efficient deterministic parsing without backtracking becomes possible. For this class of grammar systems, a parsing algorithm is presented and the feature of leftmost derivations is investigated in detail.
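As a minimal illustration of why such formalisms matter for parsing: the textbook language {a^n b^n c^n | n >= 1} is generated, for example, by grammars with controlled derivations and by cooperating distributed grammar systems, but by no context-free grammar. The ad hoc recognizer below is purely illustrative and tied to no particular formalism from the thesis:

```python
# Illustrative, ad hoc recognizer (tied to no particular formalism from the
# thesis) for the non-context-free language {a^n b^n c^n | n >= 1}, which
# regulated and cooperating grammar mechanisms can generate while
# context-free grammars cannot.

def accepts(word: str) -> bool:
    """Accept exactly the words a^n b^n c^n with n >= 1."""
    n = len(word) // 3
    return n >= 1 and len(word) == 3 * n and word == "a" * n + "b" * n + "c" * n

print(accepts("aabbcc"), accepts("aabbc"))  # True False
```

Any parser for a formalism covering this language must track the coupled counts of all three symbol blocks, which is exactly the kind of capability that goes beyond context-free parsing.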
This habilitation thesis summarises the research work performed by the author over the past fifteen years. The dissertation reflects his main research interests, which revolve around the quantum dynamics of small molecular systems, including their interactions with electromagnetic radiation or dissipative environments. This covers various dynamical processes that involve bound-bound, bound-free, and free-free molecular transitions: light-triggered rovibrational or rovibronic dynamics in bound molecules, molecular photodissociation induced by weak or strong laser fields, state-to-state reactive and/or inelastic molecular collisions, and phonon-driven vibrational relaxation of adsorbates at solid surfaces. Although the dissertation covers different topics of molecular reaction dynamics, most of these studies focus on nuclear quantum effects and their manifestations in experimental observables. The latter are assessed through comparison between quantum and classical predictions, and/or direct confrontation of theory and experiment. Most well-known quantum concepts and effects will be encountered in this work. Yet almost all of these quantum notions find their roots in the central pillar of quantum theory, namely the quantum superposition principle. Indeed, quantum coherence is the main source of most quantum effects, including interference, entanglement, and even tunneling. Thus, the common and predominant theme of all the investigations in this thesis is quantum coherence, and the survival or quenching of the resulting interference effects in various molecular processes. The lion's share of the dissertation is devoted to two associated quantum concepts that are usually overlooked in computational molecular dynamics, viz. the Berry phase and the symmetry of identical nuclei.
The importance of the latter in dynamical molecular processes and their direct fingerprints in experimental observables also rely very much on quantum coherence and entanglement. All these quantum phenomena are thoroughly discussed within the four main topics that form the core of this thesis. Each topic is described in a separate chapter, where it is briefly summarised and then illustrated with three peer-reviewed publications. The first topic deals with the relevance of quantum coherence/interference in molecular collisions, with a focus on the hydrogen-exchange reaction, H+H2 --> H2+H, and its isotopologues. For these collision processes, the significance of interference of probability amplitudes arises because of the existence of two main scattering pathways. The latter could be inelastic and reactive scattering, direct and time-delayed scattering, or two encircling reaction paths that loop in opposite senses around a conical intersection (CI) of the H3 molecular system. Our joint theoretical-experimental investigations of these processes reveal strong interference and geometric phase (GP) effects in state-to-state reaction probabilities and differential cross sections. However, these coherent effects completely cancel in integral cross sections and reaction rate constants, due to efficient dephasing of interference between the different scattering amplitudes. As byproducts of these studies, we highlight the discovery of two novel scattering mechanisms, which contradict conventional textbook pictures of molecular reaction dynamics. The second topic concerns the effect of the Berry phase on molecular photodynamics at conical intersections. To understand this effect, we developed a topological approach that separates the total molecular wavefunction of an unbound molecular system into two components, which wind in opposite senses around the conical intersection. 
This separation reveals that the only effect of the geometric phase is to change the sign of the relative phase of these two components. This in turn leads to a shift in the interference pattern of the molecular system, a phase shift that is reminiscent of the celebrated Aharonov-Bohm effect. The procedure is numerically illustrated with photodynamics at standard model CIs, as well as strong-field dissociation of diatomics at light-induced conical intersections (LICIs). Beyond the fundamental aspect of these studies, their findings make it possible to interpret and predict the effect of the GP on the state-resolved or angle-resolved spectra of pump-probe experimental schemes, particularly the distributions of photofragments in molecular photodissociation experiments. The third topic pertains to the role of the indistinguishability of identical nuclei in molecular reaction dynamics, with an emphasis on dynamical localization in highly symmetric molecules. The central question of these studies is whether nuclear-spin statistics allow dynamical localization of the electronic, vibrational, or even rotational density on a specific molecular substructure or configuration rather than on another one that is identical (indistinguishable). Group-theoretic analysis of the symmetrized molecular wavefunctions of these systems shows that nuclear permutation symmetry engenders quantum entanglement between the eigenstates of the different molecular degrees of freedom. This subsequently leads to complete quenching of dynamical localization over indistinguishable molecular substructures, an observation that is in sharp contradiction with well-known textbook views of iconic molecular processes.
This is illustrated with various examples of quantum dynamics in symmetric double-well achiral molecules, such as the prototypical umbrella inversion motion of ammonia, electronic Kekulé dynamics in the benzene molecule, and coupled electron-nuclear dynamics in laser-induced indirect photodissociation of the dihydrogen molecular cation. The last part of the thesis is devoted to the development of approximate wavefunction approaches for phonon-induced vibrational relaxation of adsorbates (system) at surfaces (bath). Due to the so-called 'curse of dimensionality', these system-bath complexes cannot be handled with standard wavefunction methods. To alleviate the exponential scaling of the latter, we developed approximate yet quite accurate numerical schemes that scale polynomially with the bath dimensionality. The corresponding algorithms combine symmetry-based reductions of the full vibrational Hilbert space with iterative Krylov techniques. These approximate wavefunction approaches resemble the 'Bixon-Jortner model' and the more general 'quantum tier model'. They are illustrated with the decay of H-Si (D-Si) vibrations on a fully H(D)-covered silicon surface, modelled with a phonon bath of more than two thousand oscillators. These approximate methods allow reliable estimation of the adsorbate vibrational lifetimes and provide some insight into vibration-phonon couplings at solid surfaces. Although this topic is mainly computational, the developed wavefunction approaches make it possible to describe quantum entanglement between the system and bath states, and to capture some coherent effects in the time evolution of the (sub-)system that cannot be accounted for with the widely used 'reduced density matrix formalism'.
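The Bixon-Jortner picture invoked above can be sketched numerically. The following is a hedged, minimal model with illustrative parameters, not the thesis' actual algorithm or surface data: a single "bright" state coupled with constant strength to an equidistant ladder of bath states, whose survival probability decays quasi-exponentially at the golden-rule rate Gamma = 2*pi*v**2/delta.

```python
import numpy as np

# Hedged sketch of the Bixon-Jortner model (illustrative parameters, not the
# thesis' algorithm or data): one bright state at energy 0, coupled with
# constant strength v to an equidistant ladder of bath states with spacing
# delta. Diagonalize exactly and follow the bright-state survival probability.

def survival(n_bath=801, delta=0.01, v=0.02, times=(0.0, 5.0, 10.0)):
    n = n_bath // 2
    e_bath = delta * np.arange(-n, n + 1)   # equidistant bath energies
    dim = n_bath + 1
    h = np.zeros((dim, dim))
    h[1:, 1:] = np.diag(e_bath)             # bath block; bright state stays at 0
    h[0, 1:] = v                            # constant system-bath coupling
    h[1:, 0] = v
    evals, evecs = np.linalg.eigh(h)
    c0 = evecs[0, :]                        # overlaps <bright|E_k>
    # survival probability: |sum_k |<bright|E_k>|^2 exp(-i E_k t)|^2
    return [abs(np.sum(np.abs(c0) ** 2 * np.exp(-1j * evals * t))) ** 2
            for t in times]

p = survival()
gamma = 2 * np.pi * 0.02 ** 2 / 0.01        # golden-rule rate, ~0.25
print([round(x, 3) for x in p])             # ~exp(-gamma*t) at early times
```

The thesis' actual schemes avoid such full diagonalization; iterative Krylov techniques reach the same short-time decay at polynomial cost in the bath size, which is what makes baths of thousands of oscillators tractable.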
The direct conversion of sunlight into usable forms of energy is one of the central cornerstones of the transition from fossil, non-renewable energy resources towards a more sustainable economy. Besides the necessary societal changes, it is the understanding of the solids employed that is of particular importance for the success of this endeavour. In this work, the principles and approaches of systematic crystallographic characterisation and systematisation of solids are employed to enable a directed tuning of material properties. A thorough understanding of the solid state hereby forms the basis on which the more applied approaches are founded.
Two material systems considered promising solar absorber materials are at the core of this work: halide perovskites and II-IV-N2 nitride materials. While the former are renowned for their high efficiencies and rapid development in recent years, the latter place an emphasis on true sustainability by avoiding toxic and scarce elements.
Continental rift systems open up unique possibilities to study the geodynamic system of our planet: geodynamic localization processes are imprinted in the morphology of the rift by governing the time-dependent activity of faults and the topographic evolution of the rift, or by controlling whether a rift is symmetric or asymmetric. Since lithospheric necking localizes strain towards the rift centre, deformation structures of previous rift phases are often well preserved, and passive margins, the end product of continental rifting, retain key information about the tectonic history from rift inception to continental rupture.
Current understanding of continental rift evolution is based on combining observations from active rifts with data collected at rifted margins. Connecting these isolated data sets is often accomplished in a conceptual way and leaves room for subjective interpretation. Geodynamic forward models, however, have the potential to link individual data sets in a quantitative manner, using additional constraints from rock mechanics and rheology, which makes it possible to transcend previous conceptual models of rift evolution. By quantifying geodynamic processes within continental rifts, numerical modelling provides key insights into tectonic processes that also operate in other plate-boundary settings, such as mid-ocean ridges, collisional mountain chains, or subduction zones.
In this thesis, I combine numerical, plate-tectonic, analytical, and analogue modelling approaches, with numerical thermomechanical modelling as the primary tool. This method has advanced rapidly during the last two decades owing to dedicated software development and the availability of massively parallel computer facilities. Nevertheless, only recently has the geodynamical modelling community been able to capture 3D lithospheric-scale rift dynamics from the onset of extension to final continental rupture.
The first chapter of this thesis provides a broad introduction to continental rifting, a summary of the applied rift modelling methods, and a short overview of previous studies. The following chapters, which constitute the main part of this thesis, feature studies on plate-boundary dynamics in two and three dimensions, followed by global-scale analyses (Fig. 1).
Chapter II focuses on 2D geodynamic modelling of rifted margin formation. It highlights the formation of wide areas of hyperextended crustal slivers via rift migration as a key process that affected many rifted margins worldwide. This chapter also contains a study of rift velocity evolution, showing that rift strength loss and extension velocity are linked through a dynamic feedback. This process results in abrupt accelerations of the involved plates during rifting, illustrating for the first time that rift dynamics play a role in changing global-scale plate motions. Since rift velocity affects key processes such as faulting, melting, and lower crustal flow, this study also implies that the slow-fast velocity evolution should be imprinted in rifted margin structures.
Chapter III relies on 3D Cartesian rift models to investigate various aspects of rift obliquity. Oblique rifting occurs if the extension direction is not orthogonal to the rift trend. Using 3D lithospheric-scale models from rift initialisation to breakup, I could isolate a characteristic evolution of dominant fault orientations. Further work in Chapter III addresses the impact of rift obliquity on the strength of the rift system. We illustrate that oblique rifting is mechanically preferred over orthogonal rifting because brittle yielding requires a lower tectonic force. This mechanism elucidates rift competition during South Atlantic rifting, where the more oblique Equatorial Atlantic Rift proceeded to breakup while the simultaneously active but less oblique West African rift system became a failed rift. Finally, this chapter investigates the impact of a previous rift phase on current tectonic activity in the linkage area between the Kenyan and Ethiopian rifts. We show that the along-strike changes in rift style are not caused by changes in crustal rheology. Instead, the rift linkage pattern in this area can be explained by accounting for the thinned crust and lithosphere of a Mesozoic rift event.
Chapter IV investigates rifting from a global perspective. A first study extends the oblique rift topic of the previous chapter to the global scale by investigating the frequency of oblique rifting during the last 230 million years. We find that approximately 70% of all ocean-forming rift segments involved an oblique component of extension, with obliquities exceeding 20°. This highlights the relevance of 3D approaches in the modelling, surveying, and interpretation of many rifted margins. In a final study, we propose a link between continental rift activity, diffuse CO2 degassing, and Mesozoic/Cenozoic climate changes. We used recent CO2 flux measurements in continental rifts to estimate the worldwide rift-related CO2 release, based on the global extent of rifts through time. The first-order correlation with paleo-atmospheric CO2 proxy data suggests that rifts constitute a major element of the global carbon cycle.
Und der Zukunft abgewandt
(2010)
Since the end of the GDR, which ushered in the collapse of the Eastern Bloc and thereby the end of the "Cold War", increasing efforts have been made to define the nature of this state and thus to understand and classify its consequences at the economic, social, psychological, and educational-policy levels. In this volume, Alexandra Budke analyses the school subject of geography, which, alongside civics and history, was a central subject in which the "civic, ideological, or worldview education" defined in the curricula was to take place on the basis of Marxism-Leninism. She clarifies to what extent geography teaching in the GDR was used to communicate and disseminate the geopolitical interests of the state. This detailed analysis of subject teaching thereby also makes it possible to answer the question of whether pupils were politically manipulated in class, and what scope for action the central actors of the classroom, the teachers and the pupils, perceived within the curricular requirements set by education policy.
Controlling interactions in synthetic polymers as precisely as in proteins would have a strong impact on polymer science. Advanced structural and functional control can lead to the rational design of integrated nano- and microstructures. To achieve this, the properties of monomer-sequence-defined oligopeptides were exploited. Through their incorporation as monodisperse segments into synthetic polymers, we have learned over the past four years how to program the structure formation of polymers, to adjust and exploit interactions in such polymers, to control inorganic-organic interfaces in fiber composites, and to induce structure in biomacromolecules such as DNA for biomedical applications.
Die Plastizität der Gefühle
(2021)
Emotional life is increasingly read out, regulated, and produced by digital technologies. This development, accompanied in equal measure by hopes and fears, is for now the latest station in a deep entanglement of affect and (cultural) technology that reaches back into early history. Bernd Bösel opens up a comprehensive genealogical view of the epoch-making readjustments of this technologization. For only by tracing the various logics of disposing over affects does it become possible to understand the interweaving of the forms of technologization on which the psycho-power ("Psychomacht") of the present is based.
Klinische Analyse der physiologischen und pathologischen Sehnenadaptation an sportliche Belastung
(2021)
Devotio malefica
(2021)
Ancient curse rituals aimed to enforce the notion of justice held by those doing the cursing, especially when neither the public justice system nor socially accepted codes of conduct could satisfy that claim. The rituals employed so-called defixionis tabellae (curse tablets), here termed devotiones maleficae. They mostly consist of inscribed lead lamellae and were made to harm one or more victims.
Sara Chiarini examines the curse language used in them, whose formulaic structures and components point to a tradition of the curse ritual. Individual additions, by contrast, offer clues to the circumstances surrounding the genesis of the ritual, the emotional state of those cursing, and the kinds of punishments, which correspond to the legal dimension of the ritual. Chiarini extends the previous state of research on the basis of newly discovered and published curse tablets and engages comprehensively with this epigraphic material.
Eco-physiological processes express the interaction of organisms with the environmental context of their habitat, as well as their degree of adaptation, their level of resistance, and the limits of life in a changing environment. The present study focuses on observations obtained with the methods of this scientific discipline of ecophysiology and places them in a broader context of universal character. The present eco-physiological work builds the basis for classifying and exploring the degree of habitability of another planet, such as Mars, by an experimental, biology-driven approach. It also offers new ways of identifying key molecules that play a specific role in the physiological processes of the tested organisms and can thus serve as potential biosignatures in future space exploration missions aimed at the search for life. This has important implications for the newly emerging scientific field of astrobiology. Astrobiology addresses the study of the origin, evolution, distribution, and future of life in the universe. The three fundamental questions hidden behind this definition are: How does life begin and evolve? Is there life beyond Earth and, if so, how can we detect it? What is the future of life on Earth and in the universe? This multidisciplinary field thus encompasses the search for habitable environments in our Solar System and for habitable planets outside it. It comprises the search for evidence of prebiotic chemistry and life on Mars and other bodies in our Solar System, such as the icy moons of the Jovian and Saturnian systems; laboratory and field research into the origins and early evolution of life on Earth; and studies of the potential of life to adapt to challenges on Earth and in space.
For this purpose an integrated research strategy was applied that connects field research and laboratory research, allowing planetary simulation experiments, with experiments performed in space (particularly in low Earth orbit).
Potassium ions (K⁺) are the most abundant inorganic cations in plants. They can make up as much as 10% of a plant's dry weight. Potassium ions fulfil important functions in various processes in the plant; for example, they are essential for growth and metabolism. Many important enzymes work optimally at a K⁺ concentration of around 100 mM. For this reason, plant cells maintain a controlled potassium concentration of about 100 mM in those of their compartments that are involved in metabolism. The uptake of potassium ions from the soil and their transport within the plant and within a plant cell are enabled by various potassium transport proteins. Maintaining a stable K⁺ concentration is only possible, however, if the activity of these transport proteins is strictly controlled. The processes that regulate the transport proteins are still only rudimentarily understood. More detailed knowledge in this field is, however, of central importance for understanding how the transport proteins are integrated into the complex system of the plant organism. This habilitation thesis summarizes my own publications describing investigations of various regulatory mechanisms of plant potassium channels. These investigations span a spectrum of protein-biochemical, biophysical, and plant-physiological analyses. To understand the regulatory mechanisms fundamentally, their structural and molecular features are examined on the one hand, and the biophysical and kinetic relationships of the regulatory mechanisms are analysed on the other. The insights gained permit a new, more detailed interpretation of the physiological role of potassium transport proteins in the plant.
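As background for biophysical analyses of potassium channels, the equilibrium (Nernst) potential set by a K⁺ gradient is a standard reference quantity; a sketch with placeholder concentrations follows (the equation is textbook electrophysiology, not a result of the thesis):

```python
import math

# Textbook Nernst equation for K+ (illustrative, not from the thesis):
# the membrane potential at which there is no net K+ flux across a channel.
def nernst_potential_mV(K_out_mM, K_in_mM, temperature_K=298.0):
    R = 8.314      # gas constant, J/(mol K)
    F = 96485.0    # Faraday constant, C/mol
    z = 1          # valence of K+
    return 1000.0 * R * temperature_K / (z * F) * math.log(K_out_mM / K_in_mM)

# Placeholder concentrations: ~100 mM cytosolic K+ against a 10 mM medium
E_K = nernst_potential_mV(K_out_mM=10.0, K_in_mM=100.0)  # about -59 mV at 25 °C
```

A tenfold concentration gradient thus corresponds to roughly 59 mV at room temperature, which is why channel gating relative to this potential determines the direction of K⁺ flux.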
Biological materials, in addition to having remarkable physical properties, can also change shape and volume. These shape and volume changes allow organisms to form new tissue during growth and morphogenesis, as well as to repair and remodel old tissues. In addition, shape or volume changes in an existing tissue can lead to useful motion or force generation (actuation) that may even still function in the dead organism, as in the well-known example of the hygroscopic opening and closing behaviour of the pine cone. Both growth and actuation of tissues are mediated, in addition to biochemical factors, by the physical constraints of the surrounding environment and the architecture of the underlying tissue. This habilitation thesis describes biophysical studies carried out over the past years on growth- and swelling-mediated shape changes in biological systems. These studies use a combination of theoretical and experimental tools to elucidate the physical mechanisms governing geometry-controlled tissue growth and geometry-constrained tissue swelling. It is hoped that, in addition to helping us understand fundamental processes of growth and morphogenesis, ideas stemming from such studies can also be used to design new materials for medicine and robotics.
Biological materials have long been used by humans because of their remarkable properties. This is surprising, since these materials are formed under physiological conditions and from commonplace constituents. Nature thus not only provides us with inspiration for designing new materials but also teaches us how to use soft molecules to tune interparticle and external forces so as to structure and assemble simple building blocks into functional entities. Magnetotactic bacteria and their chains of magnetosomes represent a striking example of such an accomplishment, in which a very simple living organism controls the properties of inorganics via organics at the nanometer scale to form a single magnetic dipole that orients the cell along the Earth's magnetic field lines. My group has developed biological and bio-inspired research based on these bacteria. My research, at the interface between chemistry, materials science, physics, and biology, focuses on how biological systems synthesize, organize, and use minerals. We apply the design principles to sustainably form hierarchical materials with controlled properties that can be used, e.g., as magnetically directed nanodevices for applications in sensing, actuation, and transport. In this thesis, I thus first present how magnetotactic bacteria intracellularly form magnetosomes and assemble them into chains. I developed an assay in which cells can be switched between magnetic and non-magnetic states. This enabled us to study the dynamics of magnetosome and magnetosome chain formation. We found that the magnetosomes nucleate within minutes, whereas chains assemble within hours. Magnetosome formation requires iron uptake as ferrous or ferric ions. The transport of the ions within the cell leads to the formation of a ferritin-like intermediate, which is subsequently transported into the magnetosome organelle and transformed into a ferrihydrite-like precursor. Finally, magnetite crystals nucleate and grow to their mature dimensions.
In addition, I show that the magnetosome assembly displays hierarchically ordered nano- and microstructures over several levels, enabling the coordinated alignment and motility of entire populations of cells. The magnetosomes are indeed composed of structurally pure magnetite. The organelles are partly composed of proteins whose role is crucial for the properties of the magnetosomes. As an example, we showed how the protein MmsF is involved in the control of magnetosome size and morphology. We have further shown by 2D X-ray diffraction that the magnetosome particles are aligned along the same direction within the magnetosome chain. We then show how the magnetic properties of the nascent magnetosomes influence the alignment of the particles, and how the proteins MamJ and MamK coordinate this assembly. We propose a theoretical approach which suggests that biological forces are more important than physical ones for chain formation. All these studies thus show how magnetosome formation and organization are under strict biological control, which is associated with unprecedented material properties. Finally, we show that the magnetosome chain enables the cells to find their preferred oxygen conditions when a magnetic field is present. The synthetic part of this work shows how understanding the design principles of magnetosome formation enabled me to perform biomimetic syntheses of magnetite particles within the highly desired size range of 25 to 100 nm. Nucleation and growth of such particles proceed by aggregation of iron colloids termed primary particles, as imaged by cryo high-resolution TEM. I show how additives influence magnetite formation and properties. In particular, MamP, a so-called magnetochrome protein involved in magnetosome formation in vivo, enables the in vitro formation of magnetite nanoparticles exclusively from ferrous iron by controlling the redox state of the process.
Negatively charged additives, such as MamJ, retard magnetite nucleation in vitro, probably by interacting with the iron ions. Other additives, such as polyarginine, can be used to control the colloidal stability of stable single-domain-sized nanoparticles. Finally, I show how we can "glue" magnetic nanoparticles together to form propellers that can be actuated and made to swim with the help of external magnetic fields. We propose a simple theory to explain the observed movement. We can use this theoretical framework to design experimental conditions for sorting the propellers by size, and we confirm this prediction experimentally. In this way we could image propellers as small as 290 nm in their longest dimension, much smaller than anything demonstrated so far.
Line-driven winds are accelerated by momentum transfer from photons to a plasma through absorption and scattering in numerous spectral lines. Line driving is most efficient for ultraviolet radiation and at plasma temperatures from 10^4 K to 10^5 K. Astronomical objects showing line-driven winds include stars of spectral type O, B, and A, Wolf-Rayet stars, and accretion disks over a wide range of scales, from disks in young stellar objects and cataclysmic variables to quasar disks. It is not yet possible to solve the full wind problem numerically and to treat the combined hydrodynamics, radiative transfer, and statistical equilibrium of these flows. The emphasis in the present writing is on wind hydrodynamics, with severe simplifications in the other two areas. I consider three topics in some detail, for reasons of personal involvement. 1. Wind instability, as caused by Doppler de-shadowing of gas parcels. The instability causes the wind gas to be compressed into dense shells enclosed by strong shocks. Fast clouds occur in the space between shells and collide with the latter. This leads to X-ray flashes which may explain the observed X-ray emission from hot stars. 2. Wind runaway, as caused by a new type of radiative waves. The runaway may explain why observed line-driven winds adopt fast, critical solutions instead of shallow (or breeze) solutions. Under certain conditions the wind settles on overloaded solutions, which show a broad deceleration region and kinks in their velocity law. 3. Magnetized winds, as launched from accretion disks around stars or in active galactic nuclei. Line driving is assisted by centrifugal forces along co-rotating poloidal magnetic field lines and by Lorentz forces due to toroidal field gradients. A vortex sheet starting at the inner disk rim can lead to highly enhanced mass-loss rates.
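For context on the velocity laws of the smooth (non-runaway) solutions mentioned above, line-driven winds are commonly approximated by the standard "beta" velocity law; a sketch with placeholder stellar parameters follows (the beta law is textbook material, not a result of this work):

```python
# Standard "beta" velocity law, a common analytic approximation to smooth
# line-driven wind solutions (textbook material; all parameter values are
# placeholders, not results of this work).
def beta_velocity(r, R_star=1.0, v_inf=2000.0, beta=0.8, b=0.99):
    """Wind speed at radius r (same unit as v_inf): v_inf*(1 - b*R_star/r)**beta."""
    return v_inf * (1.0 - b * R_star / r) ** beta

v_base = beta_velocity(1.1)    # just above the stellar surface
v_far = beta_velocity(100.0)   # far out, approaching v_inf
assert v_base < v_far          # the wind accelerates monotonically outward
```

The overloaded solutions discussed in topic 2 deviate from exactly this monotonic form, showing a deceleration region and kinks instead.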
Highly collimated, high-velocity streams of hot plasma, the jets, are observed as a general phenomenon in astrophysical objects of widely varying size and energy output. Known jet sources are protostellar objects (T Tauri stars, embedded IR sources), galactic high-energy sources ("microquasars"), and active galactic nuclei (extragalactic radio sources and quasars). Within the last two decades our knowledge of the processes involved in astrophysical jet formation has condensed into a kind of standard model. This is the scenario of a magnetohydrodynamically accelerated and collimated jet stream launched from the innermost part of an accretion disk close to the central object. Traditionally, the problem of jet formation is divided into two categories. One is the question of how to collimate and accelerate an uncollimated, low-velocity disk wind into a jet. The second is the question of how to initiate that outflow from the disk, i.e. how to turn accretion of matter into ejection as a disk wind. My own work is mainly related to the first question, the collimation and acceleration process. Due to the complexity of both the physical processes believed to be responsible for jet launching and the spatial configuration of the physical components of the jet source, the enigma of jet formation is not yet completely understood. On the theoretical side, there has been substantial progress during the last decade from purely stationary models to time-dependent simulations, driven by the vast increase in computer power. Observers, on the other hand, do not yet have instruments at hand to spatially resolve the very origin of the jets. It can be expected that the next years will bring substantial improvement on both tracks of astrophysical research.
Three-dimensional magnetohydrodynamic simulations will improve our understanding of the jet-disk interrelation and the time-dependent character of jet formation, the generation of the magnetic field in the jet source, and the interaction of the jet with the ambient medium. Another step will be the combination of radiation transfer computations and magnetohydrodynamic simulations, providing a direct link to the observations. At the same time, a new generation of telescopes (VLT, NGST) in combination with new instrumental techniques (IR interferometry) will lead to a "quantum leap" in jet observation, as the resolution will then be sufficient to zoom into the innermost region of jet formation.
Because key aspects remain insufficiently understood, the management of lipoedema poses a particular therapeutic challenge. Since the pathogenesis of the disease has not been adequately clarified and no pathognomonic diagnostic criterion has yet been defined, many affected women report years of suffering before treatment is initiated. Thanks to increased awareness of the disease in recent years, the interval to correct diagnosis has fortunately been shortened considerably. Although the attribution of their complaints to a clearly defined disease is a relief for many patients, the realization that treatment options are limited is often a renewed burden.
As a consequence of the unresolved pathogenesis, no causal therapy for lipoedema has been defined to date. Initially, conservative treatment strategies were only loosely embedded in a generally accepted concept, and their limitations in particular were not clearly defined. Although sufficient evidence is still lacking in several areas of therapy, a systematic review has made it possible to place the principal treatment options in relation to one another. Affected patients, as well as the various medical disciplines involved in treatment, thus have at their disposal a basic treatment algorithm whose recommendations go beyond the simple prescription of lymphatic drainage and compression garments. Through critical reflection on the prevailing dogmata, an interdisciplinary guideline has been proposed which, in the form of a stepwise scheme, integrates all essential pillars of therapy into a generally applicable treatment plan in a comprehensible way.
Within the multilayered management of the disease, however, surgical treatment, i.e. liposuction, frequently remains the "ultima ratio" when conservative measures fail to bring relief. The principal aim of the present work is therefore to optimize the surgical procedure when performing liposuction in patients with lipoedema, and it delineates both the limits of the indication and the potential for long-term treatment success. Long-term results show that liposuction can be regarded as a safe intervention with the potential for lasting symptom reduction in lipoedema patients. The need to interlock surgical measures with conservative therapies, and thus to integrate liposuction as a sensible treatment alternative into a clearly defined therapeutic concept, should also be emphasized.
Methodologically, the work draws on a total of 10 publications. The multi-stage mega-liposuction for the treatment of lipoedema postulated here, with cumulative total aspiration volumes of up to 66,000 ml across all procedures, could be confirmed and validated as an evidence-based treatment method. The low complication rates described are, among other things, the result of a differentiated, individualized perioperative strategy. Beyond adherence to basic methodological principles, however, numerous procedural variations exist whose implications for complication rates must each be considered separately. Although there is no consensus on a universally valid standard liposuction technique, numerous elements of perioperative management could be defined that, independently of the surgical technique used, have a potential positive influence on outcome. While liposuction for lipoedema can therefore now be considered a safe procedure overall, some aspects remain unresolved. The focus here is above all on volume management and the standardized definition of the maximum aspiration volume.
The analysis of various covariates with respect to the relief of lipoedema-associated symptoms after liposuction shows that age, body mass index (BMI), and preoperative stage of the disease have a significant influence on the postoperative result and must be taken into account when planning the multi-stage surgical approach. BMI- or body-weight-dependent target aspiration volumes, by contrast, were not relevant as prognostic factors for the postoperative outcome. Whether this might be because the routine performance of mega-liposuctions already exceeds the volume threshold "necessary" for adequate symptom relief, or whether this parameter genuinely has no influence on the result after surgery, could not be conclusively clarified.
Furthermore, a positive benefit for comorbidities associated with lipoedema could be demonstrated. By routinely integrating liposuction into the treatment scheme, the spectrum of treatment methods can thus be usefully extended by a lasting alternative. In contrast to conservative therapy alone, this represents a substantial step away from purely symptomatic treatment. In addition, the varied symptomatology of the diverse associated comorbidities must be taken into account. As a consequence, and in view of the need for a holistic, interdisciplinary therapeutic approach, the term "lipoedema syndrome" might be more apt and is put forward for discussion.
For a particular patient population, basic principles of the perioperative approach were also worked out in detail. Lipoedema patients with concomitant von Willebrand syndrome pose an extraordinary challenge with regard to bleeding complications. The available evidence-based recommendations for the therapeutic management of these patients in procedures of similar risk classification were systematically reviewed and related to the specific demands of mega-liposuction. The treatment scheme developed in the process will considerably improve the preoperative detection of coagulopathies in general, and the perioperative complication rate in von Willebrand patients in particular.
In summary, a generally applicable algorithm for the modern and long-term successful treatment of lipoedema patients, with a particular focus on mega-liposuction, could thus be developed. With adequate perioperative management and due regard for the large volume shifts involved, the intervention can be performed safely and with few complications. The pathophysiology of the disease remains unresolved; an immunological genesis as well as a primary pathology of the lymphatic vessel system or of the fat (precursor) cells are favoured as explanatory models. The development of diagnostic biomarkers should be pursued in parallel.
The potential of nanosized materials has been amply demonstrated, but a closer look shows that a significant share of this research concerns oxides and metals, while the number of studies drops drastically for metallic ceramics, namely transition metal nitrides and carbides. The scarcity of related publications does not reflect their potential but rather the difficulties of synthesizing them as dense and defect-free structures, a fundamental prerequisite for advanced mechanical applications.
The present habilitation work aims to close the gap between preparation and processing, indicating novel synthetic pathways for a simpler and more sustainable synthesis of transition metal nitride (MN) and carbide (MC) based nanostructures and for easier subsequent processing. Despite their simplicity and reliability, the designed synthetic processes allow the production of functional materials with the required size and morphology.
The goal was achieved by exploiting classical and less classical precursors, ranging from common metal salts and molecules (e.g. urea, gelatin, agar) to more exotic materials such as leaves, filter paper, and even wood. It was found that the choice of precursors and reaction conditions makes it possible to control chemical composition (going, for instance, from metal oxides to metal oxynitrides to metal nitrides, or from metal nitrides to metal carbides, up to quaternary systems), size (from 5 to 50 nm), and morphology (from simple spherical nanoparticles to rod-like shapes, fibers, layers, mesoporous and hierarchical structures, etc.). The nature of the mixed precursors also allows the preparation of metal nitride/carbide based nanocomposites, leading to multifunctional materials (e.g. MN/MC@C, MN/MC@PILs) and enabling dispersion in liquid media. Control over composition, size, and morphology is obtained by simple adjustment of the main route, but also by coupling it with processes such as electrospinning, aerosol spraying, and bio-templating. Last but not least, the nature of the precursor materials also allows easy processing, including printing, coating, casting, and the preparation of films and thin layers.
The designed routes are conceptually similar: they all start by building up a secondary metal-ion-N/C precursor network, which converts, upon heat treatment, into an intermediate "glass". This glass stabilizes the nascent nanoparticles during their nucleation and hinders their uncontrolled growth during the heat treatment (Scheme 1). In this way, one of the main problems in the synthesis of MN/MC, i.e. the need for very high temperatures, could also be overcome (from up to 2000°C for classical syntheses down to 700°C in the present cases). The designed synthetic pathways are also conceived to allow the use of non-toxic compounds and to minimize (or even avoid) post-synthesis purification, while still yielding phase-pure and well-defined (crystalline) nanoparticles.
This research helps to simplify the preparation of MN/MC, making these systems readily available in suitable amounts for both fundamental and applied science. The prepared systems have been tested (in some cases for the first time) in many different fields, e.g. in batteries (MnN0.43@C showed a capacity stabilized at 230 mAh/g, with coulombic efficiencies close to 100%), as alternative magnetic materials (Fe3C nanoparticles were prepared with different sizes and therefore different magnetic behavior, superparamagnetic or ferromagnetic, showing a saturation magnetization of up to 130 emu/g, i.e. similar to the value expected for the bulk material), as filters and for the degradation of organic dyes (outmatching the performance of carbon), and as catalysts (both as the active phase and as an active support, leading to high turnover rates and, more interestingly, to tunable selectivity). Furthermore, with this route it was possible to prepare, for the first time to the best of our knowledge, well-defined and crystalline MnN0.43, Fe3C, and Zn1.7GeN1.8O nanoparticles via bottom-up approaches.
Once the synthesis of these materials is straightforward, any further modification, combination, or manipulation is in principle possible, and new systems can be purposely conceived (e.g. hybrids, nanocomposites, ferrofluids, etc.).
Phonology limited
(2007)
Phonology Limited is a study of the areas of phonology where the application of optimality theory (OT) has previously been problematic. Evidence from a wide variety of phenomena in a wide variety of languages is presented to show that interactions involving more than just faithfulness and markedness are best analyzed as involving language-specific morphological constraints rather than universal phonological constraints. OT has proved to be a highly insightful and successful theory of linguistics in general and phonology in particular, focusing as it does on surface forms and treating the relationship between inputs and outputs as a form of conflict resolution. Yet there have also been a number of serious problems with the approach that have led some detractors to argue that OT has failed as a theory of generative grammar. The most serious of these problems is opacity, defined as a state of affairs where the grammatical output of a given input appears to violate more constraints than an ungrammatical competitor. It is argued that these problems disappear once language-specific morphological constraints are allowed to play a significant role in analysis. Specifically, a number of processes of Tiberian Hebrew traditionally considered opaque are reexamined and shown to be straightforwardly transparent, but crucially involving morphological constraints on form, such as a constraint requiring certain morphological forms to end with a syllabic trochee, or a constraint requiring paradigm uniformity with regard to the occurrence of fricative allophones of stop phonemes. Language-specific morphological constraints are also shown to play a role in allomorphy, where a lexeme is associated with more than one input; the constraint hierarchy then decides which input is grammatical in which context. For example, [ɨ]/[ə] and [u]/[ə] alternation found in some lexemes but not in others in Welsh is attributed to the presence of two inputs for the lexemes with the alternation. 
A novel analysis of the initial consonant mutations of the modern Celtic languages argues that mutated forms are separately listed inputs chosen in appropriate contexts by constraints on morphology and syntax, rather than being outputs that are phonologically unfaithful to their unmutated inputs. Finally, static irregularities and lexical exceptions are examined and shown to be attributable to language-specific morphological constraints. In American English, the distribution of tense and lax vowels is predictable in several contexts; however, in some contexts, the distributions of tense [ɔ] vs. lax [a] and of tense [æ] vs. lax [æ] are not as expected. It is shown that clusters of output-output faithfulness constraints create a pattern to which words are attracted, which however violates general phonological considerations. New words that enter the language first obey the general phonological considerations before being attracted into the language-specific exceptional pattern.
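The constraint-interaction logic underlying these analyses can be sketched as a toy OT evaluator: candidates are compared lexicographically on their violation profiles under a ranked constraint hierarchy. The constraints and forms below are invented for illustration and are not drawn from the Tiberian Hebrew, Welsh, Celtic, or English analyses above:

```python
# Toy OT evaluator: the optimal candidate has the lexicographically minimal
# violation profile under the ranked constraint hierarchy. Constraints and
# forms are invented for illustration only.

def optimal_candidate(candidates, ranked_constraints):
    """Return the candidate with the fewest violations of the highest-ranked
    constraint, breaking ties with successively lower-ranked constraints."""
    return min(candidates, key=lambda c: tuple(con(c) for con in ranked_constraints))

VOWELS = ("a", "e", "i", "o", "u")
# *CODA: one violation per syllable (separated by ".") not ending in a vowel.
no_coda = lambda cand: sum(1 for s in cand.split(".") if not s.endswith(VOWELS))
# Faithfulness: one violation per segment deleted from or added to input /tak/.
faith = lambda cand: abs(len(cand.replace(".", "")) - len("tak"))

# With *CODA ranked above faithfulness, deleting the coda wins:
winner = optimal_candidate(["tak", "ta"], [no_coda, faith])  # -> "ta"
```

Reversing the ranking to [faith, no_coda] makes the fully faithful candidate "tak" optimal, which is the sense in which a constraint hierarchy "decides which input is grammatical in which context".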
The role played by azobenzene polymers in modern photonic, electronic and opto-mechanical applications can hardly be overstated. These polymers are successfully used to produce alignment layers for liquid crystalline fluorescent polymers in display and semiconductor technology, to build waveguides and waveguide couplers, as data storage media, and as labels in quality product protection. A very hot topic in current research is light-driven artificial muscles based on azobenzene elastomers. The incorporation of azobenzene chromophores into polymer systems via covalent bonding, or even by blending, gives rise to a number of unusual effects under visible (VIS) and ultraviolet light irradiation. The most striking effect is the inscription of surface relief gratings (SRGs) onto thin azobenzene polymer films. At least seven models have been proposed to explain the origin of the inscribing force, but none of them satisfactorily describes the light-induced material transport on the molecular level. In most models, to explain the mass transport over micrometer distances during irradiation at room temperature, it is necessary to assume a considerable degree of photoinduced softening, at least comparable with that at the glass transition. Contrary to this assumption, we have gathered convincing evidence that there is no considerable softening of the azobenzene layers under illumination. We can now say with confidence that light-induced softening is a weak accompanying effect rather than a necessary condition for the formation of SRGs. This means that the inscribing force must exceed the yield point of the azobenzene polymer. Hence, an appropriate approach to describing the formation and relaxation of SRGs is a viscoplastic theory. It was used to reproduce the pulse-like inscription of SRGs as measured by VIS light scattering.
At longer inscription times the VIS scattering pattern exhibits some peculiarities, which can be explained by the appearance of a density grating that will be shown to arise from the finite compressibility of the polymer film. As a logical consequence of the aforementioned research, a thermodynamic theory explaining the light-induced deformation of free-standing films and the formation of SRGs is proposed. The basic idea of this theory is that under homogeneous illumination an initially isotropic sample should stretch itself along the polarization direction to compensate for the entropy decrease produced by the photoinduced reorientation of the azobenzene chromophores. Finally, some ideas about the further development of this controversial topic are discussed.
It has long been known that upon contact of a biomaterial with its biological environment, during implantation or extracorporeal interaction, proteins from the surrounding medium are adsorbed first, with the surface properties of the material determining the composition of the protein layer and the conformation of the proteins it contains. The subsequent interaction of cells with the material is therefore usually mediated by this adsorbate layer. The influence of the surfaces on the composition and conformation of the proteins and on the subsequent interaction with cells is of particular interest: on the one hand it allows conclusions about the applicability of a material, and on the other hand insights into these relationships can be used to develop new materials with improved biocompatibility. In this habilitation thesis, the influence of the composition of polymers, and of their surface properties, on the adsorption of proteins, the activation state of plasmatic coagulation, and the adhesion of cells was therefore investigated. Possibilities for influencing these processes by changing the bulk composition or by surface modification of biomaterials were also presented. Findings from this work could be used for the development of membranes for biohybrid organs.
In this thesis, a collection of studies is presented that advance research on complex food webs in several directions. Food webs, as the networks of predator-prey interactions in ecosystems, are responsible for distributing the resources every organism needs to stay alive. They are thus central to our understanding of the mechanisms that support biodiversity, which in the face of increasing severity of anthropogenic global change and accelerated species loss is of highest importance, not least for our own well-being.
The studies in the first part of the thesis are concerned with general mechanisms that determine the structure and stability of food webs. It is shown how the allometric scaling of metabolic rates with the species' body masses supports their persistence in size-structured food webs (where predators are larger than their prey), and how this interacts with the adaptive adjustment of foraging efforts by consumer species to create stable food webs with a large number of coexisting species. The importance of the master trait body mass for structuring communities is further exemplified by demonstrating that the specific way the body masses of species engaging in empirically documented predator-prey interactions affect the predator's feeding rate dampens population oscillations, thereby helping both species to survive. In the first part of the thesis it is also shown that in order to understand certain phenomena of population dynamics, it may be necessary to not only take the interactions of a focal species with other species into account, but to also consider the internal structure of the population. This can refer for example to different abundances of age cohorts or developmental stages, or the way individuals of different age or stage interact with other species.
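The allometric scaling of metabolic rates invoked above can be made concrete with a minimal numerical sketch. The rate constant, function name, and masses below are illustrative choices, not values from the thesis; the point is only that a power law with a -1/4 exponent puts a large predator on a slower metabolic clock than its prey:

```python
# Mass-specific biological rates (metabolism, maximum consumption) are
# commonly assumed to scale with body mass M as a power law with a
# -1/4 exponent (Kleiber-type allometry). Constants are illustrative.

def mass_specific_rate(mass, a=0.314, exponent=-0.25):
    """Allometric rate per unit biomass: a * M**exponent."""
    return a * mass ** exponent

prey_mass = 1.0          # arbitrary mass units
predator_mass = 100.0    # predator 100 times heavier than its prey

ratio = mass_specific_rate(predator_mass) / mass_specific_rate(prey_mass)

# The larger predator runs on a slower metabolic clock per unit
# biomass: 100**(-0.25) ~ 0.316 of the prey's rate. This time-scale
# separation is one ingredient that dampens predator-prey oscillations.
print(ratio)   # ~ 0.316
```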
Building on these general insights, the second part of the thesis is devoted to exploring the consequences of anthropogenic global change on the persistence of species. It is first shown that warming decreases diversity in size-structured food webs. This is due to starvation of large predators on higher trophic levels, which suffer from a mismatch between their respiration and ingestion rates when temperature increases. In host-parasitoid networks, which are not size-structured, warming does not have these negative effects, but eutrophication destabilises the systems by inducing detrimental population oscillations. In further studies, the effect of habitat change is addressed. On the level of individual patches, increasing isolation of habitat patches has a similar effect as warming, as it leads to decreasing diversity due to the extinction of predators on higher trophic levels. In this case it is caused by dispersal mortality of smaller and therefore less mobile species on lower trophic levels, meaning that an increasing fraction of their biomass production is lost to the inhospitable matrix surrounding the habitat patches as they become more isolated. It is further shown that increasing habitat isolation desynchronises population oscillations between the patches, which in itself helps species to persist by dampening fluctuations on the landscape level. However, this is counteracted by an increasing strength of local population oscillations fuelled by an indirect effect of dispersal mortality on the feeding interactions. Last, a study is presented that introduces a novel mechanism for supporting diversity in metacommunities. It builds on the self-organised formation of spatial biomass patterns in the landscape, which leads to the emergence of spatio-temporally varying selection pressures that keep local communities permanently out of equilibrium and force them to continuously adapt. 
Because this mechanism relies on the spatial extension of the metacommunity, it is also sensitive to habitat change.
In the third part of the thesis, the consequences of biodiversity for the functioning of ecosystems are explored. The studies focus on standing stock biomass, biomass production, and trophic transfer efficiency as ecosystem functions. It is first shown that increasing the diversity of animal communities increases the total rate of intra-guild predation. However, the total biomass stock of the animal communities nevertheless increases, which also increases their exploitative pressure on the underlying plant communities. Despite this, the plant communities can maintain their standing stock biomass due to a shift of the body size spectra of both animal and plant communities towards larger species with a lower specific respiration rate. In another study it is further demonstrated that the generally positive relationship between diversity and the above-mentioned ecosystem functions becomes steeper when not only the feeding interactions but also the numerous non-trophic interactions (like predator interference or competition for space) between the species of an ecosystem are taken into account. Finally, two studies are presented that demonstrate the power of functional diversity as an explanatory variable. It is interpreted as the range spanned by the functional traits of the species that determine their interactions. This approach makes it possible to understand mechanistically how the ecosystem functioning of food webs with multiple trophic levels is affected by all parts of the food web and why a high functional diversity is required for the efficient transport of energy from the primary producers to the top predators.
The general discussion draws some synthesising conclusions, e.g. on the predictive power of ecosystem functioning to explain diversity, and provides an outlook on future research directions.
Ecce figura
(2023)
What we talk about when we talk about figures is a complex question that touches several disciplines. Erich Auerbach's figura/Mimesis project initiated the interdisciplinary study of this concept. Whether in the history of literature, images, or knowledge, the presence and topicality of figura in Romance and comparative studies attests to a continuing interest in theoretical work at the intersection of theology, philosophy, literary studies, and art history. What has been missing so far, however, is a fundamental methodological reflection that gives equal weight to the interdisciplinary aspects and unites them in a joint effort on the concept.
Remedying this omission is the task of the present work. Starting from Erich Auerbach, Walter Benjamin, and Hannah Arendt, the monograph traces, in comparative constellations from antiquity to modernity, the literary, art-historical, theological, and philosophical traces of figura, which are developed into a method of literary-philosophical figuralogy.
Ecce figura is conceived as a compendium of interdisciplinary conceptual history between literature, philosophy, and theology that invites readers to read it in new constellations and to extend it.
Earthquake faults interact with each other in many different ways, and hence earthquakes cannot be treated as individual, independent events. Although earthquake interactions generally lead to a complex evolution of the crustal stress field, this does not necessarily mean that earthquake occurrence becomes random and completely unpredictable. In particular, the interplay between earthquakes can explain the occurrence of pronounced characteristics such as periods of accelerated and depressed seismicity (seismic quiescence) as well as spatiotemporal earthquake clustering (swarms and aftershock sequences). Ignoring the time-dependence of the process by looking at time-averaged values – as largely done in standard procedures of seismic hazard assessment – can thus lead to erroneous estimations not only of the activity level of future earthquakes but also of their spatial distribution. There is therefore an urgent need for applicable time-dependent models. In my work, I aimed at a better understanding and characterization of earthquake interactions in order to improve seismic hazard estimations. For this purpose, I studied seismicity patterns on spatial scales ranging from hydraulic fracture experiments (meters to kilometers) to fault-system size (hundreds of kilometers), while the temporal scale of interest varied from the immediate aftershock activity (minutes to months) to seismic cycles (tens to thousands of years). My studies revealed a number of new characteristics of fluid-induced and stress-triggered earthquake clustering as well as precursory phenomena in earthquake cycles. The analysis of earthquake and deformation data was accompanied by statistical and physics-based model simulations, which allow a better understanding of the role of structural heterogeneities, stress changes, afterslip, and fluid flow.
Finally, new strategies and methods have been developed and tested which help to improve seismic hazard estimations by taking the time-dependence of the earthquake process appropriately into account.
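The time-dependent statistical description of aftershock clustering mentioned above is classically captured by the modified Omori-Utsu decay law. The following sketch evaluates that standard empirical relation; the parameter values and function names are illustrative, not taken from this work:

```python
def omori_rate(t, K=100.0, c=0.1, p=1.1):
    """Modified Omori-Utsu law: aftershock rate n(t) = K / (c + t)**p,
    with t the time since the mainshock (here in days)."""
    return K / (c + t) ** p

def expected_count(t1, t2, K=100.0, c=0.1, p=1.1, steps=10000):
    """Expected number of aftershocks in [t1, t2], obtained by
    trapezoidal integration of the rate."""
    dt = (t2 - t1) / steps
    total = 0.0
    for i in range(steps):
        a = t1 + i * dt
        b = a + dt
        total += 0.5 * (omori_rate(a, K, c, p) + omori_rate(b, K, c, p)) * dt
    return total

# The rate decays roughly as 1/t: one day after the mainshock the
# activity is far higher than one month later.
print(omori_rate(1.0))              # events per day at t = 1 day
print(omori_rate(30.0))             # events per day at t = 30 days
print(expected_count(0.0, 30.0))    # expected events in the first month
```

Time-averaging this rate over a long window, as in standard hazard procedures, would miss exactly this strong early-time enhancement.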
We theoretically discuss the interaction of neutral particles (atoms, molecules) with surfaces in the regime where it is mediated by the electromagnetic field. A thorough characterization of the field at sub-wavelength distances is worked out, including energy density spectra and coherence functions. The results are applied to typical situations in integrated atom optics, where ultracold atoms are coupled to a thermal surface, and to single molecule probes in near field optics, where sub-wavelength resolution can be achieved.
Worldwide, almost 40 % of the population is overweight, and the prevalence of obesity, insulin resistance, and the resulting secondary diseases such as the metabolic syndrome and type 2 diabetes is rising rapidly. Poor diet and lack of exercise are regarded as the most common causes. Non-alcoholic fatty liver disease (NAFLD), whose main characteristic is the excessive accumulation of lipids in the liver, correlates with the body mass index (BMI). NAFLD is regarded as the hepatic manifestation of the metabolic syndrome and has become the most common cause of impaired liver function. The disease comprises both benign hepatic steatosis (fatty liver) and the progressive form, non-alcoholic steatohepatitis (NASH), in which steatosis is accompanied by inflammation and fibrosis. The development of NASH increases the risk of hepatocellular carcinoma (HCC) and can lead to irreversible liver cirrhosis and terminal organ failure. Dietary components such as cholesterol and high-fat diets are discussed as possible factors promoting the transition from simple fatty liver to the severe course of steatohepatitis (NASH). Expansion of adipose tissue is accompanied by insulin resistance and low-grade chronic inflammation of the adipose tissue. In addition to endotoxins from the gut, inflammatory mediators from the adipose tissue reach the liver. As a consequence, the resident macrophages of the liver, the Kupffer cells, are activated; they initiate an inflammatory response and release further pro-inflammatory mediators, including chemokines, cytokines, and prostanoids such as prostaglandin E2 (PGE2).
This work aimed to elucidate the contribution of PGE2 to the development of insulin resistance, hepatic steatosis, and inflammation in diet-induced NASH, in the complex interplay with the regulation of cytokine production and other co-factors such as hyperinsulinaemia and hyperlipidaemia. In murine and human macrophage populations, it was investigated which factors promote the formation of PGE2 and how PGE2 regulates the inflammatory response of activated macrophages. In primary rat hepatocytes as well as in isolated human hepatocytes and cell lines, the influence of PGE2, alone and in combination with cytokines whose production can be modulated by PGE2, on the insulin-dependent regulation of glucose and lipid metabolism was examined. To capture the influence of PGE2 in the complex interplay of the cell types in the liver and in the whole organism, mice in which PGE2 synthesis was reduced by deletion of microsomal PGE synthase 1 (mPGES1) were fed a NASH-inducing diet. In livers of patients with NASH and in mice with diet-induced NASH, the expression of the PGE2-synthesising enzymes cyclooxygenase 2 (COX2) and mPGES1, as well as the formation of PGE2, was increased compared with healthy controls and correlated with the severity of the liver disease. In primary macrophages from human, mouse, and rat, as well as in human macrophage cell lines, the formation of pro-inflammatory mediators such as chemokines, cytokines, and prostaglandins such as PGE2 was enhanced when the cells were incubated with endotoxins such as lipopolysaccharide (LPS), fatty acids such as palmitic acid, cholesterol and cholesterol crystals, or insulin, which is released in increased amounts as a consequence of the compensatory hyperinsulinaemia in insulin resistance.
Insulin acted synergistically with LPS or palmitic acid to enhance the synthesis of PGE2 and of other inflammatory mediators such as interleukin (IL) 8 and IL-1β. PGE2 regulates the inflammatory response: in addition to inducing its own synthesising enzymes, PGE2 enhanced the expression of the immune-cell-recruiting chemokines IL-8 and (C-C motif) ligand 2 (CCL2) as well as of the pro-inflammatory cytokines IL-1β and IL-6 in macrophages, and can thus amplify the inflammatory reaction. Furthermore, PGE2 promoted the formation of oncostatin M (OSM), and OSM in turn induced the expression of the PGE2-synthesising enzymes in a positive feedback loop. On the other hand, PGE2 inhibited the basal and LPS-mediated formation of the potent pro-inflammatory cytokine tumour necrosis factor α (TNFα) and can thus attenuate the inflammatory reaction. In primary rat hepatocytes and in human hepatocytes, PGE2 directly impaired the insulin-dependent activation of the insulin receptor signalling cascade that increases glucose utilisation: via pathways downstream of the various PGE2 receptors it activated kinases such as ERK1/2 and IKKβ, causing an inhibitory serine phosphorylation of the insulin receptor substrates. PGE2 also enhanced IL-6- or OSM-mediated insulin resistance and steatosis in primary rat hepatocytes. The effect of PGE2 in the whole organism was investigated in mice with diet-induced NASH. Feeding a high-fat diet with lard, which contains mainly saturated fatty acids, as the fat source caused obesity, insulin resistance, and hepatic steatosis in wild-type mice.
In animals that received a high-fat diet with soybean oil, which contains mainly (ω-6) polyunsaturated fatty acids (PUFAs), as the fat source, or a low-fat diet with cholesterol, only hepatic steatosis was detectable, but no increased weight gain compared with littermates fed a standard diet. In contrast, feeding a high-fat diet with PUFA-rich soybean oil as the fat source in combination with cholesterol caused obesity and insulin resistance as well as hepatic steatosis with hepatocyte hypertrophy, lobular inflammation, and incipient fibrosis in wild-type mice. These animals reflected all clinical and histological parameters of human NASH in the metabolic syndrome. Only the combination of high amounts of unsaturated fatty acids from soybean oil and cholesterol in the diet led to excessive accumulation of cholesterol and the formation of cholesterol crystals in the hepatocytes, which caused mitochondrial damage, severe oxidative stress, and ultimately cell death. As a consequence, Kupffer cells phagocytose the debris of the cholesterol-overloaded hepatocytes, become activated, and release chemokines, cytokines, and PGE2, which can amplify the inflammatory reaction and initiate the infiltration of further immune cells, thereby driving progression to steatohepatitis (NASH). Deletion of microsomal PGE synthase 1 (mPGES1), the inducible enzyme of PGE2 synthesis from cyclooxygenase-dependent precursors, reduced the diet-dependent formation of PGE2 in the liver. Feeding the NASH-inducing diet caused similar obesity and gain in fat mass, as well as the development of hepatic steatosis with inflammation and fibrosis (NASH) in the histological picture, in wild-type and mPGES1-deficient mice.
In mPGES1-deficient mice, however, parameters of inflammatory cell infiltration and of diet-dependent liver damage were elevated compared with wild-type animals, which was also reflected in a stronger diet-induced systemic insulin resistance. The formation of the pro-inflammatory and pro-apoptotic cytokine TNFα was enhanced in mPGES1-deficient mice owing to the loss of the negative feedback inhibition, resulting in increased diet-induced death of stressed, lipid-overloaded hepatocytes and a downstream inflammatory response. In summary, under the chosen experimental conditions an anti-inflammatory role of PGE2 was verified in vivo, since the prostanoid attenuated liver damage, the amplification of inflammation, and the development of insulin resistance in diet-dependent fatty liver disease, mainly indirectly through inhibition of the TNFα-mediated inflammatory reaction.
This book is concerned with the diachronic development of selected topic and focus markers in Spanish, Portuguese and French. On the one hand, it focuses on the development of these structures from their relational meaning to their topic-/focus-marking meaning; on the other hand, it is concerned with their current form and use. Thus, Romance topic and focus markers – such as sp. en cuanto a, pt. a propósito de, fr. au niveau de or sentence-initial sp. Lo que, as well as clefts and pseudo-clefts – are investigated from a quantitative and qualitative perspective. The author argues that topic markers have procedural meaning and that their function is bound to their syntactic position. An important contribution of this study is that real linguistic evidence (in the form of data from various corpora) is analyzed instead of operating with constructed examples.
This habilitation thesis includes seven case studies that examine climate variability during the past 3.5 million years from different temporal and spatial perspectives. The main geographical focus is on the climatic events of the African and Asian monsoonal system, the North Atlantic, and the Arctic Ocean. The results of this study are based on marine and terrestrial climate archives obtained by sedimentological and geochemical methods and subsequently analyzed by various statistical methods.
The results presented herein provide a picture of the climatic background conditions of past cold and warm periods, the sensitivity of past climate phases in relation to changes in the atmospheric carbon dioxide content, and the tight linkage between the low- and high-latitude climate system. Based on the results, it is concluded that a warm background climate state strongly influenced and/or partially reversed the linear relationships between individual climate processes that are valid today. The driving force of the low latitudes for the climate variability of the high latitudes is also emphasized in the present work, which is contrary to the conventional view that global climate change over the past 3.5 million years was predominantly controlled by high-latitude climate variability. Furthermore, it is found that on long geologic time scales (>1000 years to millions of years), solar irradiance variability due to changes in the Earth-Sun-Moon system may have increased the sensitivity of the low and high latitudes to changes in atmospheric carbon dioxide.
Taken together, these findings provide new insights into the sensitivity of past climate phases and provide new background conditions for numerical models that predict future climate change.
Bishops in the Frankish Empire were influential political actors who, over the course of the 9th century, developed a learned body of knowledge about their own office. Reflections of this knowledge about the nature of the episcopal office can be found in many texts, most of them from West Francia. What has remained open, however, is what relevance this knowledge and the bishops' sense of their own estate actually had: was it acknowledged as a normative frame of reference by the other politically relevant estates? How did it develop across the upheavals of the 10th century, in the post-Carolingian period and the beginning church reform? The book addresses these questions through an examination of depositions of bishops in West Francia in the 9th and 10th centuries and through an analysis of the image of the bishop in monastic as well as episcopal circles in the 10th and early 11th centuries. In this way, a differentiated picture can be drawn of how the episcopal office was perceived and how the knowledge about it was put to use in different contexts.
Understanding the formation of stars in galaxies is central to much of modern astrophysics. For several decades it has been thought that the star formation process is primarily controlled by the interplay between gravity and magnetostatic support, modulated by neutral-ion drift. Recently, however, both observational and numerical work has begun to suggest that supersonic interstellar turbulence rather than magnetic fields controls star formation. This review begins with a historical overview of the successes and problems of both the classical dynamical theory of star formation, and the standard theory of magnetostatic support from both observational and theoretical perspectives. We then present the outline of a new paradigm of star formation based on the interplay between supersonic turbulence and self-gravity. Supersonic turbulence can provide support against gravitational collapse on global scales, while at the same time it produces localized density enhancements that allow for collapse on small scales. The efficiency and timescale of stellar birth in Galactic gas clouds strongly depend on the properties of the interstellar turbulent velocity field, with slow, inefficient, isolated star formation being a hallmark of turbulent support, and fast, efficient, clustered star formation occurring in its absence. After discussing in detail various theoretical aspects of supersonic turbulence in compressible self-gravitating gaseous media relevant for star forming interstellar clouds, we explore the consequences of the new theory for both local star formation and galactic scale star formation. The theory predicts that individual star-forming cores are likely not quasi-static objects, but dynamically evolving. Accretion onto these objects will vary with time and depend on the properties of the surrounding turbulent flow. This has important consequences for the resulting stellar mass function. 
Star formation on the scale of galaxies as a whole is expected to be controlled by the balance between gravity and turbulence, just like star formation on the scale of individual interstellar gas clouds, but may be modulated by additional effects like cooling and differential rotation. The dominant mechanism for driving interstellar turbulence in star-forming regions of galactic disks appears to be supernova explosions. In the outer disk of our Milky Way or in low-surface-brightness galaxies, the coupling of rotation to the gas through magnetic fields or gravity may become important.
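The balance between support and gravity described above is often quantified through a Jeans-type stability criterion. The following sketch uses illustrative values and the common approximation of adding the turbulent velocity dispersion to the thermal sound speed in quadrature; it is not code from the review, only an illustration of how turbulence raises the threshold mass for collapse:

```python
import math

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]

def jeans_length(c_s, rho):
    """Jeans length lambda_J = sqrt(pi * c_s**2 / (G * rho)) [cm]."""
    return math.sqrt(math.pi * c_s**2 / (G * rho))

def jeans_mass(c_s, rho):
    """Mass in a sphere of diameter lambda_J [g]."""
    lam = jeans_length(c_s, rho)
    return (4.0 / 3.0) * math.pi * rho * (lam / 2.0) ** 3

# Cold molecular gas: T ~ 10 K -> c_s ~ 0.2 km/s; n ~ 100 cm^-3
rho = 100 * 2.3 * 1.67e-24   # mean molecular weight ~2.3 m_H [g/cm^3]
c_s = 2.0e4                  # thermal sound speed [cm/s]

# Turbulent "support": replace c_s by an effective speed,
# c_eff^2 = c_s^2 + sigma_turb^2 / 3 (a common approximation).
sigma = 2.0e5                # 2 km/s turbulent velocity dispersion
c_eff = math.sqrt(c_s**2 + sigma**2 / 3.0)

print(jeans_mass(c_s, rho) / M_SUN)    # thermal-only Jeans mass [M_sun]
print(jeans_mass(c_eff, rho) / M_SUN)  # turbulence raises the threshold
```

On global scales the large effective dispersion suppresses collapse, while locally the same turbulence creates density enhancements (larger rho, hence smaller Jeans mass) that can collapse.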
Computational cosmology
(2008)
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered the only “experiment” available to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to distribute the work efficiently and to focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure for setting up their initial conditions. As running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates has yet been detected. Utilizing both the novel simulation code and the halo finder, we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics.
We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides; while those alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade, simulation codes and computer hardware have advanced to such a stage that it became possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field that is now known as “Near-Field Cosmology”. We present an in-depth study of the dynamics of subhaloes and the development of debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their hosts may not be negligible. Finally, the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
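Halo finding of the kind used above is classically based on friends-of-friends grouping: particles closer to each other than a linking length belong to the same object. A minimal, naive O(N^2) sketch of this standard algorithm (illustrative only; not the object finder developed in the thesis, which operates on adaptive grids):

```python
def friends_of_friends(points, linking_length):
    """Naive friends-of-friends grouping: particles closer than the
    linking length end up in the same group (union-find with path
    halving). Returns a list of groups of particle indices."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    b2 = linking_length ** 2
    for i in range(n):
        xi, yi, zi = points[i]
        for j in range(i + 1, n):
            xj, yj, zj = points[j]
            if (xi - xj)**2 + (yi - yj)**2 + (zi - zj)**2 <= b2:
                union(i, j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two well-separated clumps plus one isolated particle
pts = [(0, 0, 0), (0.1, 0, 0), (0.05, 0.1, 0),
       (5, 5, 5), (5.1, 5, 5),
       (9, 9, 9)]
halos = friends_of_friends(pts, linking_length=0.2)
print(sorted(len(h) for h in halos))  # → [1, 2, 3]
```

Production finders replace the O(N^2) pair loop with spatial trees or grids and typically tie the linking length to the mean inter-particle separation.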