Eco-physiological processes express the interaction of organisms with the environmental context of their habitat: their degree of adaptation, their level of resistance, and the limits of life in a changing environment. The present study focuses on observations obtained with the methods of the scientific discipline of ecophysiology and places them in a broader scientific context of universal character. This eco-physiological work builds the basis for classifying and exploring the degree of habitability of another planet such as Mars by a biology-driven experimental approach. It also offers new ways of identifying key molecules that play a specific role in the physiological processes of the tested organisms and can therefore serve as potential biosignatures in future space exploration missions aimed at the search for life. This has important implications for the newly emerging scientific field of astrobiology. Astrobiology addresses the study of the origin, evolution, distribution and future of life in the universe. The three fundamental questions hidden behind this definition are: How does life begin and evolve? Is there life beyond Earth and, if so, how can we detect it? What is the future of life on Earth and in the universe? This multidisciplinary field thus encompasses the search for habitable environments in our Solar System and for habitable planets outside it. It comprises the search for evidence of prebiotic chemistry and life on Mars and on other bodies in our Solar System, such as the icy moons of the Jovian and Saturnian systems; laboratory and field research into the origins and early evolution of life on Earth; and studies of the potential of life to adapt to challenges on Earth and in space.
For this purpose an integrated research strategy was applied that connects field research and laboratory research, allowing planetary simulation experiments, with investigations performed in space (particularly in low Earth orbit).
This cumulative habilitation thesis presents new work on the systematics, paleoecology, and evolution of antelopes and other large mammals, focusing mainly on the late Miocene to Pleistocene terrestrial fossil record of Africa and Arabia. The studies included here range from descriptions of new species to broad-scale analyses of diversification and community evolution in large mammals over millions of years. A uniting theme is the evolution, across both temporal and spatial scales, of the environments and faunas that characterize modern African savannas. One conclusion of this work is that macroevolutionary changes in large mammals are best characterized at regional (subcontinental to continental) and long-term temporal scales. General views of evolution developed on records that are too restricted in spatial and temporal extent are likely to ascribe too much influence to local or short-lived events. While this distinction in the scale of analysis and interpretation may seem trivial, it is challenging to implement given the geographically and temporally uneven nature of the fossil record, and the difficulties of synthesizing spatially and temporally dispersed datasets. This work attempts to do just that, bringing together primary fossil discoveries from eastern Africa to Arabia, from the Miocene to the Pleistocene, and across a wide range of (mainly large mammal) taxa. The end result is support for hypotheses stressing the impact of both climatic and biotic factors on long-term faunal change, and a more geographically integrated view of evolution in the African fossil record.
Continental rift systems open up unique possibilities to study the geodynamic system of our planet: geodynamic localization processes are imprinted in the morphology of the rift by governing the time-dependent activity of faults, the topographic evolution of the rift or by controlling whether a rift is symmetric or asymmetric. Since lithospheric necking localizes strain towards the rift centre, deformation structures of previous rift phases are often well preserved and passive margins, the end product of continental rifting, retain key information about the tectonic history from rift inception to continental rupture.
Current understanding of continental rift evolution is based on combining observations from active rifts with data collected at rifted margins. Connecting these isolated data sets is often accomplished in a conceptual way and leaves room for subjective interpretation. Geodynamic forward models, however, have the potential to link individual data sets in a quantitative manner, using additional constraints from rock mechanics and rheology, which makes it possible to transcend previous conceptual models of rift evolution. By quantifying geodynamic processes within continental rifts, numerical modelling provides key insights into tectonic processes that also operate in other plate boundary settings, such as mid-ocean ridges, collisional mountain chains or subduction zones.
In this thesis, I combine numerical, plate-tectonic, analytical, and analogue modelling approaches, with numerical thermomechanical modelling as the primary tool. This method has advanced rapidly during the last two decades owing to dedicated software development and the availability of massively parallel computing facilities. Nevertheless, only recently has the geodynamic modelling community been able to capture 3D lithospheric-scale rift dynamics from the onset of extension to final continental rupture.
The first chapter of this thesis provides a broad introduction to continental rifting, a summary of the applied rift modelling methods and a short overview of previous studies. The following chapters, which constitute the main part of this thesis, feature studies on plate boundary dynamics in two and three dimensions, followed by global-scale analyses (Fig. 1).
Chapter II focuses on 2D geodynamic modelling of rifted margin formation. It highlights the formation of wide areas of hyperextended crustal slivers via rift migration as a key process that affected many rifted margins worldwide. This chapter also contains a study of rift velocity evolution, showing that rift strength loss and extension velocity are linked through a dynamic feedback. This process results in abrupt accelerations of the involved plates during rifting, illustrating for the first time that rift dynamics plays a role in changing global-scale plate motions. Since rift velocity affects key processes like faulting, melting and lower crustal flow, this study also implies that the slow-fast velocity evolution should be imprinted in rifted margin structures.
Chapter III relies on 3D Cartesian rift models in order to investigate various aspects of rift obliquity. Oblique rifting occurs if the extension direction is not orthogonal to the rift trend. Using 3D lithospheric-scale models from rift initialisation to breakup, I could isolate a characteristic evolution of dominant fault orientations. Further work in Chapter III addresses the impact of rift obliquity on the strength of the rift system. We illustrate that oblique rifting is mechanically preferred over orthogonal rifting, because brittle yielding requires a lower tectonic force. This mechanism elucidates rift competition during South Atlantic rifting, where the more oblique Equatorial Atlantic Rift proceeded to breakup while the simultaneously active but less oblique West African Rift System became a failed rift. Finally, this chapter also investigates the impact of a previous rift phase on current tectonic activity in the linkage area between the Kenyan and Ethiopian rifts. We show that the along-strike changes in rift style are not caused by changes in crustal rheology. Instead, the rift linkage pattern in this area can be explained by accounting for the thinned crust and lithosphere of a Mesozoic rift event.
Chapter IV investigates rifting from the global perspective. A first study extends the oblique rift topic of the previous chapter to the global scale by investigating the frequency of oblique rifting during the last 230 million years. We find that approximately 70% of all ocean-forming rift segments involved an oblique extension component with obliquities exceeding 20°. This highlights the relevance of 3D approaches in the modelling, surveying and interpretation of many rifted margins. In a final study, we propose a link between continental rift activity, diffuse CO2 degassing and Mesozoic/Cenozoic climate changes. We used recent CO2 flux measurements in continental rifts to estimate worldwide rift-related CO2 release, based on the global extent of rifts through time. The first-order correlation with paleo-atmospheric CO2 proxy data suggests that rifts constitute a major element of the global carbon cycle.
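The scaling behind such an estimate can be sketched in a few lines; the rift length and per-length flux below are hypothetical placeholders chosen for illustration, not the study's measured values.

```python
# Back-of-the-envelope sketch of the rift CO2 estimate described above:
# worldwide release scales as active rift length times a per-length flux
# derived from field measurements. All numbers are illustrative only.

def global_rift_co2_release(rift_length_km, flux_mt_per_km_yr):
    """Total CO2 release [Mt/yr] from the global rift network."""
    return rift_length_km * flux_mt_per_km_yr

# A hypothetical 10,000 km of active rift degassing 0.005 Mt CO2 per km and year:
print(global_rift_co2_release(10_000, 0.005))  # 50.0 Mt CO2 per year
```

Estimating the release through geological time then amounts to evaluating this product for the reconstructed rift length of each time slice.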
The anatomically modern human Homo sapiens sapiens is distinguished by a high adaptability of physiology, physique and behaviour to short-term changes in environmental conditions. Since our environmental factors are constantly changing because of anthropogenic influences, the question arises as to how far we have an impact on the human phenotype during the very sensitive growth phase of children and adolescents. Growth and development of all children and adolescents follow a universal and typical pattern. This pattern has evolved as the result of trade-offs over the 6-7 million years of human evolution. This typically human growth pattern differs from that of other long-living social primate species. It can be divided into different biological age stages with specific biological, cognitive and socio-cultural characteristics. Phenotypic plasticity is the ability of an organism to react to an internal or external environmental input with a change in form, state, movement or rate of activity (West-Eberhard 2003). This plasticity becomes visible and measurable particularly when, in addition to the normal variability of phenotypic characteristics within a population, its manifestation changes within a relatively short time. The focus of the present work is the comparison of age-specific dimensional changes. The basis of the presented studies is more than 75,000 anthropometric data sets of children and adolescents from 1980 up to today, together with historical height data available in the scientific literature. Due to reduced daily physical activity, today's 6-18 year-olds have lower values of pelvic and elbow breadths. The observed increase in body height can be explained by hierarchies in the social networks of human societies, contrary to earlier explanations (influence of nutrition, good living conditions and genetics).
A shift towards a more feminine fat distribution pattern in boys and girls parallels the increase of chemicals in our environment that can affect the hormone system. Changing environmental conditions can have selective effects over generations, such that the genotype whose individuals have a higher progeny rate than other individuals in the population becomes increasingly prevalent. These individuals then form the phenotype that allows optimum adaptation to the changed environmental conditions. Due to the slow succession of generations and the low progeny rate (Hawkes et al. 1998), changes in the genotype of a population that quickly become visible in the phenotype are unlikely to occur in Homo sapiens sapiens within a short time. In the data sets on which the presented investigations are based, such changes appear virtually impossible. The study periods cover 5-30, at most 100, years (based on body-height data from historical data sets).
Classical physics and chemistry distinguish between three types of bonding: the covalent bond, the ionic bond and the metallic bond. Molecules, by contrast, are held together by weak interactions; despite their weak forces these interactions are less well understood, but no less important. In forward-looking fields such as nanotechnology, supramolecular chemistry and biochemistry they are of fundamental importance.
To describe, predict and understand weak intermolecular interactions, they first have to be captured theoretically. This involves various quantum-chemical methods, which in this work are presented, compared, further developed and finally applied to exemplary problems in chemistry. Building on a hierarchy of methods of different accuracy, they are employed, elaborated and combined for these purposes.
What is calculated is the electronic structure, i.e. the distribution and energy of the electrons that essentially hold the atoms together. Since the inaccuracies in the description of the electronic structure depend on the methods used, these effects can be examined in detail, described and further developed, and then tested on various model systems. The speed of the calculations on modern computers is an essential component to consider, since in general the accuracy grows exponentially with the computation time and therefore must eventually reach the limits of what is feasible.
The most accurate of the methods used is based on coupled-cluster theory, which enables very good predictions. With it, so-called spectroscopic accuracy with deviations of only a few wavenumbers is achieved, as comparisons with experimental data show. One way of approximating highly accurate methods is based on density functional theory: here the "Boese-Martin for Kinetics" (BMK) functional was developed, whose functional form recurs in many density functionals published after 2010.
With the help of the more accurate methods, semi-empirical force fields describing intermolecular interactions can finally be parametrized for individual systems; these require far less computation time than the methods based on the exact calculation of the electronic structure of molecules.
For larger systems, different methods can also be combined. In this context, embedding procedures were refined and proposed together with new methodological approaches. They use both symmetry-adapted perturbation theory and the quantum-chemical embedding of fragments into larger, quantum-chemically calculated systems.
The development of new methods derives its value essentially from their application:
In this work, hydrogen bonds were the first focus. They are among the stronger intermolecular interactions and remain a challenge. In contrast, van der Waals interactions are relatively easy to describe with force fields. As a result, many of the methods in use today perform comparatively poorly for systems dominated by hydrogen bonds.
This is followed by an investigation of molecular aggregates, considering the effects of intermolecular interactions on the vibrational frequencies of molecules. Here, we also go beyond the so-called rigid-rotor harmonic-oscillator approximation.
A far-reaching application concerns adsorbates, here molecules on ionic or metallic surfaces. They can be treated with methods similar to those for intermolecular interactions, and can be described very accurately with special embedding procedures. The results of these theoretical calculations stimulated a re-evaluation of the previously known experimental results.
Molecular crystals are an extremely important field of research. They are held together by weak interactions ranging from van der Waals forces to hydrogen bonds. Here, too, newly developed methods were employed that represent an interesting alternative, at least as accurate as the currently common methods.
Hence both the developed methods and their applications are extremely diverse. The electronic-structure calculations treated here range from so-called post-Hartree-Fock methods via the use of density functional theory to semi-empirical force fields and their combinations. The applications extend from single molecules in the gas phase via adsorption on surfaces to the molecular solid.
Ferroelectrets are internally charged polymer foams or cavity-containing polymer-film systems that combine large piezoelectricity with mechanical flexibility and elastic compliance. The term “ferroelectret” was coined based on the fact that it is a space-charge electret that also shows ferroic behavior. In this thesis, comprehensive work on ferroelectrets, and in particular on their preparation, their charging, their piezoelectricity and their applications is reported.
For industrial applications, ferroelectrets with well-controlled distributions or even uniform values of cavity size and cavity shape and with good thermal stability of the piezoelectricity are very desirable. Several types of such ferroelectrets are developed using techniques such as straightforward thermal lamination, sandwiching sticky templates with electret films, and screen printing. In particular, fluoroethylenepropylene (FEP) film systems with tubular-channel openings, prepared by means of the thermal lamination technique, show piezoelectric d33 coefficients of up to 160 pC/N after charging through dielectric barrier discharges (DBDs). For samples charged at suitable elevated temperatures, the piezoelectricity is stable at temperatures of at least 130°C. These preparation methods are easy to implement at laboratory or industrial scales, and are quite flexible in terms of material selection and cavity geometry design. Due to the uniform and well-controlled cavity structures, samples are also very suitable for fundamental studies on ferroelectrets.
Charging of ferroelectrets is achieved via a series of dielectric barrier discharges (DBDs) inside the cavities. In the present work, the DBD charging process is comprehensively studied by means of optical, electrical and electro-acoustic methods. The spectrum of the transient light from the DBDs in cellular polypropylene (PP) ferroelectrets directly confirms the ionization of molecular nitrogen, and allows the determination of the electric field in the discharge. Detection of the light emission reveals not only DBDs under high applied voltage but also back discharges when the applied voltage is reduced to sufficiently low values. Back discharges are triggered by the internally deposited charges, as the breakdown inside the cavities is controlled by the sum of the applied electric field and the electric field of the deposited charges. The remanent effective polarization is determined by the breakdown strength of the gas-filled cavities. These findings form the basis of more efficient charging techniques for ferroelectrets such as charging with high-pressure air, thermal poling and charging assisted by gas exchange. With the proposed charging strategies, the charging efficiency of ferroelectrets can be enhanced significantly.
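The breakdown criterion described above (a discharge ignites when the sum of the applied field and the field of the deposited charges exceeds the breakdown strength of the gas-filled cavity) can be sketched as follows; all field values are illustrative placeholders, not measured data.

```python
# Minimal sketch of the cavity-breakdown criterion described above.
# All field values (in V/m) are illustrative placeholders.

def discharge_occurs(e_applied, e_deposited, e_breakdown):
    """A discharge ignites when the total field in the cavity, i.e. the sum
    of the applied field and the field of the previously deposited charges,
    exceeds the breakdown strength of the gas."""
    return abs(e_applied + e_deposited) >= e_breakdown

# Forward DBD at high applied voltage (deposited charges partly oppose it):
print(discharge_occurs(100e6, -20e6, 60e6))  # True: net 80 MV/m > 60 MV/m

# Back discharge after the applied voltage is reduced: the field of the
# deposited charges now dominates and triggers breakdown on its own.
print(discharge_occurs(10e6, -70e6, 50e6))   # True: net 60 MV/m > 50 MV/m
```

In this picture the remanent effective polarization saturates once the deposited-charge field pulls the net cavity field below the breakdown strength, consistent with the statement that the polarization is limited by the breakdown strength of the gas-filled cavities.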
After charging, the cavities can be considered as man-made macroscopic dipoles whose direction can be reversed by switching the polarity of the applied voltage. Polarization-versus-electric-field (P(E)) hysteresis loops in ferroelectrets are observed by means of an electro-acoustic method combined with dielectric resonance spectroscopy. P(E) hysteresis loops in ferroelectrets are also obtained by more direct measurements using a modified Sawyer-Tower circuit. Hysteresis loops prove the ferroic behavior of ferroelectrets. However, repeated switching of the macroscopic dipoles involves complex physico-chemical processes. The DBD charging process generates a cold plasma with numerous active species and thus modifies the inner polymer surfaces of the cavities. Such treatments strongly affect the chargeability of the cavities. At least for cellular PP ferroelectrets, repeated DBDs in atmospheric conditions lead to considerable fatigue of the effective polarization and of the resulting piezoelectricity.
The macroscopic dipoles in ferroelectrets are highly compressible, and hence the piezoelectricity is essentially the primary effect. It is found that the piezoelectric d33 coefficient is proportional to the polarization and the elastic compliance of the sample, providing hints for developing materials with higher piezoelectric sensitivity in the future. Due to their outstanding electromechanical properties, there has been constant interest in the application of ferroelectrets. The antiresonance frequencies (fp) of ferroelectrets are sensitive to the boundary conditions during measurement. A tubular-channel FEP ferroelectret is conformably attached to a self-organized minimum-energy dielectric elastomer actuator (DEA). It turns out that the antiresonance frequency (fp) of the ferroelectret film changes noticeably with the bending angle of the DEA. Therefore, the actuation of DEAs can be used to modulate the fp value of ferroelectrets, but fp can also be exploited for in-situ diagnosis and for precise control of the actuation of the DEA. Combination of DEAs and ferroelectrets opens up various new possibilities for application.
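The stated proportionality of the d33 coefficient to polarization and elastic compliance can be illustrated with a minimal sketch; the input values and the unit proportionality constant are hypothetical, chosen only to show the scaling.

```python
# Illustrative sketch of the stated scaling d33 ~ polarization x compliance.
# The numbers are hypothetical; the proportionality constant is set to 1.

def d33_estimate(polarization, compliance):
    # d33 [C/N] taken as proportional to the remanent effective
    # polarization [C/m^2] times the elastic compliance [m^2/N].
    return polarization * compliance

soft = d33_estimate(1e-4, 2e-6)   # softer sample, same polarization
stiff = d33_estimate(1e-4, 1e-6)  # stiffer sample
print(soft / stiff)  # prints 2.0: doubling the compliance doubles d33
```

This is why the abstract points to higher elastic compliance (softer cavity structures) as a route to materials with higher piezoelectric sensitivity.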
Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force in nature that we understand least. One of the cardinal predictions of this theory is black holes. Massive, dark objects are found in the majority of galaxies. Our own galactic center contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and look for deviations from real observations. Nuclear star clusters harboring a massive, dark object with a mass of up to ~ ten million solar masses are good testbeds to probe the event horizon of the potential SMBH with stars. The channels for interaction between stars and the central SMBH are that (a) compact stars and stellar-mass black holes can gradually inspiral into the SMBH due to the emission of gravitational radiation, which is known as an “Extreme Mass Ratio Inspiral” (EMRI), and (b) stars can produce gas that is accreted by the SMBH, either through normal stellar evolution or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines which, combined, will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
This work presents the functioning and the acquisition of German capitalization on a theoretical and empirical basis. The starting point is a text-pragmatic generalization of previous graphematic approaches, which are extended into an overarching model of majuscule use in German that also includes non-orthographic domains (all-caps setting, small caps, word-internal capitals, etc.).
In the empirical part of the work, the orthographic performance data of about 5,700 test subjects of different age groups (4th grade to adult education) are examined and developed into a general acquisition model of capitalization. With the help of neural network simulations, different learner types are distinguished and discontinuities in the acquisition of competence are demonstrated, pointing to qualitative strategy changes in ontogenesis. The work concludes with reflections on the data from the perspectives of orthography didactics and spelling diagnostics.
Quantitative thermodynamic and geochemical modeling is today applied in a variety of geological environments, from the petrogenesis of igneous rocks to the oceanic realm. Thermodynamic calculations are used, for example, to gain better insight into lithosphere dynamics, to constrain melting processes in crust and mantle, and to study fluid-rock interaction. The development of thermodynamic databases and computer programs to calculate equilibrium phase diagrams has greatly advanced our ability to model geodynamic processes from subduction to orogenesis. However, a well-known problem is that, despite its broad application, the use and interpretation of thermodynamic models applied to natural rocks is far from straightforward. For example, chemical disequilibrium and/or unknown rock properties, such as fluid activities, complicate the application of equilibrium thermodynamics.
One major aspect of the publications presented in this Habilitationsschrift is new approaches to unravel the dynamic and chemical histories of rocks, including applications to chemically open-system behaviour. This approach is especially important in rocks that are affected by element fractionation due to fractional crystallisation and fluid loss during dehydration reactions. Furthermore, chemically open-system behaviour also has to be considered when studying fluid-rock interaction processes and when extracting information from compositionally zoned metamorphic minerals. In this Habilitationsschrift several publications are presented in which I incorporate such open-system behaviour in the forward models by incrementing the calculations and considering the changing reacting rock composition during metamorphism. I apply thermodynamic forward modelling incorporating the effects of element fractionation in a variety of geodynamic and geochemical applications in order to better understand lithosphere dynamics and mass transfer in solid rocks.
In three of the presented publications I combine thermodynamic forward models with trace element calculations in order to enlarge the application of geochemical numerical forward modeling. In these publications a combination of thermodynamic and trace element forward modeling is used to study and quantify processes in metamorphic petrology at spatial scales from µm to km. In the thermodynamic forward models I utilize Gibbs energy minimization to quantify mineralogical changes along a reaction path of a chemically open fluid/rock system. These results are combined with mass balanced trace element calculations to determine the trace element distribution between rock and melt/fluid during the metamorphic evolution. Thus, effects of mineral reactions, fluid-rock interaction and element transport in metamorphic rocks on the trace element and isotopic composition of minerals, rocks and percolating fluids or melts can be predicted.
One of the included publications shows that trace element growth zonations in metamorphic garnet porphyroblasts can be used to get crucial information about the reaction path of the investigated sample. In order to interpret the major and trace element distribution and zoning patterns in terms of the reaction history of the samples, we combined thermodynamic forward models with mass-balance rare earth element calculations. Such combined thermodynamic and mass-balance calculations of the rare earth element distribution among the modelled stable phases yielded characteristic zonation patterns in garnet that closely resemble those in the natural samples. We can show in that paper that garnet growth and trace element incorporation occurred in near thermodynamic equilibrium with matrix phases during subduction and that the rare earth element patterns in garnet exhibit distinct enrichment zones that fingerprint the minerals involved in the garnet-forming reactions.
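The mass-balance idea behind such combined calculations can be sketched in a strongly simplified form, assuming fixed equilibrium partition coefficients D_i relative to a reference phase; the phase modes and D values below are hypothetical, for illustration only.

```python
# Simplified sketch of mass-balanced trace element partitioning among
# coexisting phases, assuming equilibrium partition coefficients D_i
# relative to a reference phase. Modes and D values are hypothetical.

def partition(c_bulk, modes, d_coeffs):
    """Distribute a trace element among phases by mass balance:
    c_bulk = sum(x_i * c_i), with c_i = D_i * c_ref for each phase i."""
    denom = sum(x * d for x, d in zip(modes, d_coeffs))
    c_ref = c_bulk / denom
    return [d * c_ref for d in d_coeffs]

# Two phases: garnet (small mode, large hypothetical D for a heavy REE)
# and a matrix assemblage (large mode, D = 1 by choice of reference).
conc = partition(c_bulk=10.0, modes=[0.1, 0.9], d_coeffs=[50.0, 1.0])
print(conc)  # garnet is strongly enriched relative to the matrix
```

In the actual forward models, such a balance is re-evaluated at each growth increment with the modes from the Gibbs energy minimization, and the garnet rim composition is locked in, producing the characteristic growth zonations described above.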
In two of the presented publications I illustrate the capacities of combined thermodynamic-geochemical modeling based on examples relevant to mass transfer in subduction zones. The first example focuses on fluid-rock interaction in and around a blueschist-facies shear zone in felsic gneisses, where fluid-induced mineral reactions and their effects on boron (B) concentrations and isotopic compositions in white mica are modeled. In the second example, fluid release from a subducted slab and associated transport of B and variations in B concentrations and isotopic compositions in liberated fluids and residual rocks are modeled. I show that, combined with experimental data on elemental partitioning and isotopic fractionation, thermodynamic forward modeling unfolds enormous capacities that are far from exhausted.
In my publications presented in this Habilitationsschrift I compare the modeled results to geochemical data of natural minerals and rocks and demonstrate that the combination of thermodynamic and geochemical models enables a quantification of metamorphic processes and insights into element cycling that have so far been unattainable.
Thus, the contributions to the scientific community presented in this Habilitationsschrift concern the fields of petrology, geochemistry and geochronology, but also ore geology, all of which use thermodynamic and geochemical models to solve various problems related to geo-materials.
In the present work, various experiments on the electrical conductivity of suture and collision zones are discussed in context in order to demonstrate the possibilities that modern magnetotellurics (MT) offers for imaging fossil tectonic systems. From the new high-resolution images of electrical conductivity, potential commonalities of different tectonic units can be derived. Within the last decade, the further development of instrumentation and of analysis and interpretation methods has opened up entirely new perspectives for geodynamic deep sounding. This is evident from my research, acquired in the form of self-raised projects and carried out at the Deutsches GeoForschungsZentrum Potsdam. In Table A, I list the experiments considered in this work, which were carried out in recent years either as array or as profile measurements. Field experiments of this size require a team of scientists, students and technical staff. This also means that students and doctoral candidates under my supervision have treated partial aspects of these experiments in diploma, bachelor's and master's theses or dissertations. I contributed as co-author to the subsequent publication of these works. The enclosed publications contain an introduction to the method of magnetotellurics and, where applicable, descriptions of newly developed methods. A general presentation of the theoretical foundations of magnetotellurics can be found, for example, in Chave & Jones (2012); Simpson & Bahr (2005); Kaufman & Keller (1981); Nabighian (1987); Weaver (1994). The work also includes a glossary explaining some terms and abbreviations.
I have decided to leave in English those terms for which there is no adequate German translation or which would acquire a different or misleading meaning in German. They are marked by italics.
Poly(Ionic Liquid)s
(2015)
The ecohydrological transfers, interactions and degradation arising from high-intensity storm events
(2015)
Biological materials, in addition to having remarkable physical properties, can also change shape and volume. These shape and volume changes allow organisms to form new tissue during growth and morphogenesis, as well as to repair and remodel old tissues. In addition, shape or volume changes in an existing tissue can lead to useful motion or force generation (actuation) that may continue to function even in the dead organism, as in the well-known example of the hygroscopic opening and closing of the pine cone. Both growth and actuation of tissues are mediated, in addition to biochemical factors, by the physical constraints of the surrounding environment and the architecture of the underlying tissue. This habilitation thesis describes biophysical studies carried out over the past years on growth- and swelling-mediated shape changes in biological systems. These studies use a combination of theoretical and experimental tools to elucidate the physical mechanisms governing geometry-controlled tissue growth and geometry-constrained tissue swelling. It is hoped that, in addition to helping to understand fundamental processes of growth and morphogenesis, ideas stemming from such studies can also be used to design new materials for medicine and robotics.
The potential of nanosized materials has been amply demonstrated, but a closer look shows that a significant share of this research concerns oxides and metals, while the number of studies drops drastically for metallic ceramics, namely transition metal nitrides and carbides. The scarcity of related publications does not reflect their potential but rather the difficulties of synthesizing them as dense and defect-free structures, a fundamental prerequisite for advanced mechanical applications.
The present habilitation work aims to close the gap between preparation and processing, indicating novel synthetic pathways for a simpler and more sustainable synthesis of transition metal nitride (MN) and carbide (MC) based nanostructures and for easier subsequent processing. Despite their simplicity and reliability, the designed synthetic processes allow the production of functional materials with the desired size and morphology.
The goal was achieved by exploiting classical and less classical precursors, ranging from common metal salts and molecules (e.g. urea, gelatin, agar) to more exotic materials such as leaves, filter paper, and even wood. It was found that the choice of precursors and reaction conditions makes it possible to control chemical composition (going, for instance, from metal oxides to metal oxynitrides to metal nitrides, or from metal nitrides to metal carbides, up to quaternary systems), size (from 5 to 50 nm), and morphology (from simple spherical nanoparticles to rod-like shapes, fibers, layers, mesoporous and hierarchical structures, etc.). The nature of the mixed precursors also allows the preparation of metal nitride/carbide based nanocomposites, leading to multifunctional materials (e.g. MN/MC@C, MN/MC@PILs) and also enabling dispersion in liquid media. Control over composition, size, and morphology is obtained by simple adjustment of the main route, but also by coupling it with processes such as electrospinning, aerosol spraying, and bio-templating. Last but not least, the nature of the precursor materials also allows easy processing, including printing, coating, casting, and the preparation of films and thin layers.
The designed routes are conceptually similar: they all start by building up a secondary metal ion-N/C precursor network, which converts, upon heat treatment, into an intermediate "glass". This glass stabilizes the nascent nanoparticles during their nucleation and inhibits their uncontrolled growth during the heat treatment (Scheme 1). In this way, one of the main problems of MN/MC synthesis, the need for very high temperatures, could also be overcome (from up to 2000 °C for classical syntheses down to 700 °C in the present cases). The designed synthetic pathways are also conceived to allow the use of non-toxic compounds and to minimize (or even avoid) post-synthesis purification, while still yielding phase-pure and well-defined (crystalline) nanoparticles.
This research helps to simplify the preparation of MN/MC, making these systems readily available in suitable amounts for both fundamental and applied science. The prepared systems have been tested (in some cases for the first time) in many different fields, e.g. in batteries (MnN0.43@C showed a capacity stabilized at 230 mAh/g, with coulombic efficiencies close to 100%), as alternative magnetic materials (Fe3C nanoparticles were prepared with different sizes and therefore different magnetic behavior, superparamagnetic or ferromagnetic, showing a saturation magnetization of up to 130 emu/g, i.e. similar to the value expected for the bulk material), as filters and for the degradation of organic dyes (outmatching the performance of carbon), and as catalysts (both as active phase and as active support, leading to high turnover rates and, more interestingly, to tunable selectivity). Furthermore, with this route it was possible to prepare, for the first time to the best of our knowledge, well-defined and crystalline MnN0.43, Fe3C, and Zn1.7GeN1.8O nanoparticles via bottom-up approaches.
Once the synthesis of these materials is made straightforward, any further modification, combination, or manipulation is in principle possible, and new systems can be purposely conceived (e.g. hybrids, nanocomposites, ferrofluids, etc.).
Biological materials have long been used by humans because of their remarkable properties. This is all the more striking since these materials are formed under physiological conditions and from commonplace constituents. Nature thus not only provides us with inspiration for designing new materials but also teaches us how to use soft molecules to tune interparticle and external forces so as to structure and assemble simple building blocks into functional entities. Magnetotactic bacteria and their chains of magnetosomes represent a striking example of such an accomplishment, in which a very simple living organism controls the properties of inorganic material via organics at the nanometer scale to form a single magnetic dipole that orients the cell along the Earth's magnetic field lines. My group has developed a biological and bio-inspired research program based on these bacteria. My research, at the interface between chemistry, materials science, physics, and biology, focuses on how biological systems synthesize, organize, and use minerals. We apply the design principles to sustainably form hierarchical materials with controlled properties that can be used, e.g., as magnetically directed nanodevices for applications in sensing, actuation, and transport. In this thesis, I thus first present how magnetotactic bacteria intracellularly form magnetosomes and assemble them into chains. I developed an assay in which cells can be switched between magnetic and non-magnetic states. This enabled us to study the dynamics of magnetosome and magnetosome chain formation. We found that the magnetosomes nucleate within minutes, whereas chains assemble within hours. Magnetosome formation requires iron uptake as ferrous or ferric ions. Transport of the ions within the cell leads to the formation of a ferritin-like intermediate, which is subsequently transported into the magnetosome organelle and transformed into a ferrihydrite-like precursor. Finally, magnetite crystals nucleate and grow to their mature dimensions.
In addition, I show that the magnetosome assembly displays hierarchically ordered nano- and microstructures over several levels, enabling the coordinated alignment and motility of entire populations of cells. The magnetosomes are indeed composed of structurally pure magnetite. The organelles are partly composed of proteins, whose role is crucial for the properties of the magnetosomes. As an example, we showed how the protein MmsF is involved in the control of magnetosome size and morphology. We further showed by 2D X-ray diffraction that the magnetosome particles are aligned along the same direction within the magnetosome chain. We then show how the magnetic properties of the nascent magnetosomes influence the alignment of the particles, and how the proteins MamJ and MamK coordinate this assembly. We propose a theoretical approach suggesting that biological forces are more important than physical ones for chain formation. All these studies thus show how magnetosome formation and organization are under strict biological control, which is associated with unprecedented material properties. Finally, we show that the magnetosome chain enables the cells to find their preferred oxygen conditions when a magnetic field is present. The synthetic part of this work shows how understanding the design principles of magnetosome formation enabled me to perform biomimetic synthesis of magnetite particles within the highly desired size range of 25 to 100 nm. Nucleation and growth of such particles proceed by aggregation of iron colloids termed primary particles, as imaged by cryo high-resolution TEM. I show how additives influence magnetite formation and properties. In particular, MamP, a so-called magnetochrome protein involved in magnetosome formation in vivo, enables the in vitro formation of magnetite nanoparticles exclusively from ferrous iron by controlling the redox state of the process.
Negatively charged additives such as MamJ retard magnetite nucleation in vitro, probably by interacting with the iron ions. Other additives, e.g. polyarginine, can be used to control the colloidal stability of stable single-domain-sized nanoparticles. Finally, I show how we can "glue" magnetic nanoparticles together to form propellers that can be actuated and made to swim with the help of external magnetic fields. We propose a simple theory to explain the observed movement. We used this theoretical framework to design experimental conditions for sorting the propellers by size, and confirmed this prediction experimentally. In doing so, we could image propellers with sizes down to 290 nm in their longer dimension, much smaller than had been achieved so far.
Hydrothermal carbonisation
(2013)
The world's appetite for energy is producing growing quantities of CO2, a pollutant that contributes to the warming of the planet and which currently cannot be removed or stored in any significant way. Other natural reserves are also being devoured at alarming rates, and current assessments suggest that we will need to identify alternative sources in the near future. With the aid of materials chemistry, it should be possible to create a world in which energy use need not be limited and usable energy can be produced and stored wherever it is needed, where we can minimize and remediate emissions as new consumer products are created, while healing the planet and preventing further disruptive and harmful depletion of valuable mineral assets. In achieving these aims, the creation of new and, very importantly, greener industries and new sustainable pathways is crucial. In all of the aforementioned applications, new carbon-based materials, ideally produced via inexpensive, low-energy methods using renewable resources as precursors, with flexible morphologies, pore structures, and functionalities, are increasingly viewed as ideal candidates to fulfill these goals. The resulting materials should offer a feasible solution for the efficient storage of energy and gases. At the end of their life, such materials should ideally improve soil quality and act as potential CO2 storage sinks. This is exactly the subject of this habilitation thesis: an alternative technology for producing carbon materials from biomass in water, using low carbonisation temperatures and self-generated pressures. This technology, called hydrothermal carbonisation, has been developed over the past five years by a group of young and talented researchers working under the supervision of Dr. Titirici at the Max-Planck Institute of Colloids and Interfaces, and it is now a well-recognised methodology for producing carbon materials with important applications in our daily lives.
These applications include electrodes for portable electronic devices, filters for water purification, catalysts for the production of important chemicals as well as drug delivery systems and sensors.
The habilitation thesis covers theoretical investigations of light-induced processes in molecules. The study focuses on changes of the molecular electronic structure and geometry, caused either by photoexcitation in the course of a spectroscopic analysis or by selective control with shaped laser pulses. The applied and developed methods are predominantly based on quantum chemistry as well as on electron and nuclear quantum dynamics, and in part on molecular dynamics. The scientific problems studied deal with stereoisomerism and the question of how to either switch or distinguish chiral molecules using laser pulses, and with the essentials for simulating the spectroscopic response of biochromophores in order to unravel their photophysics. The findings not only explain experimental results and extend existing approaches, but also contribute significantly to the basic understanding of the investigated light-driven molecular processes. The main achievements can be divided into three parts: First, a quantum theory for enantio- and diastereoselective or, in general, stereoselective laser pulse control was developed and successfully applied to influence the chirality of molecular switches. The proposed axially chiral molecules possess different numbers of "switchable" stable chiral conformations, with one particular switch even featuring a truly achiral "off" state, which allows its chirality to be "turned on" enantioselectively. Furthermore, surface-mounted chiral molecular switches with several well-defined orientations were treated, where a newly devised, highly flexible stochastic pulse optimization technique provides high stereoselectivity and efficiency at the same time, even for coupled chirality-changing degrees of freedom.
Despite the model character of these studies, the proposed types of chiral molecular switches and, all the more, the developed basic concepts are generally applicable for designing laser-pulse-controlled catalysts for asymmetric synthesis, or for achieving selective changes in the chirality of liquid crystals or in chiroptical nanodevices, implementable in information processing or as data storage. Second, laser-driven electron wavepacket dynamics based on ab initio calculations, namely time-dependent configuration interaction, was extended by the explicit inclusion of magnetic field-magnetic dipole interactions in order to simulate the qualitative and quantitative distinction of enantiomers in mass spectrometry by means of circularly polarized ultrashort laser pulses. The developed approach not only explains the origin of the experimentally observed influence of the pulse duration on the detected circular dichroism in the ion yield, but also predicts laser pulse parameters for an optimal distinction of enantiomers by ultrashort shaped laser pulses. Moreover, these investigations, in combination with the previous ones, provide a fundamental understanding of the relevance of electric and magnetic interactions between linearly or nonlinearly polarized laser pulses and (pro-)chiral molecules for either control by enantioselective excitation or distinction by enantiospecific excitation. Third, for selected light-sensitive biological systems of central importance, such as the antenna complexes of photosynthesis, processes taking place during and after photoexcitation of their chromophores were simulated, in order to explain experimental (spectroscopic) findings and to understand the underlying photophysical and photochemical principles.
In particular, aspects of normal-mode mixing due to geometrical changes upon photoexcitation and their impact on (time-dependent) vibronic and resonance Raman spectra, as well as on intramolecular energy redistribution, were addressed. To explain unresolved experimental findings, a simulation program for the calculation of vibronic and resonance Raman spectra, accounting for changes in both vibrational frequencies and normal modes, was created based on a time-dependent formalism. In addition, the influence of the biochemical environment on the electronic structure of the chromophores was studied via electrostatic interactions and mechanical embedding using hybrid quantum-classical methods. Environmental effects were found to be important, in particular for the excitonic coupling of chromophores in light-harvesting complex II. Although simulations for such highly complex systems are still restricted by various approximations, the improved approaches and the results obtained have proven to be important contributions to a better understanding of light-induced processes in biosystems, which also aids efforts toward their artificial reproduction.
Die Koloniale Karibik
(2012)
Does the Caribbean of the nineteenth century not anticipate phenomena and processes that are only now entering our awareness? Viewing the kaleidoscopic world of the Caribbean through the literary and cultural trans-processes of that era allows entirely new insights into the early processes of cultural globalization. Racist discourses, established models of "white" abolitionists, politics of memory, and the hitherto barely acknowledged role of the Haitian Revolution combine into an amalgam that calls into question our conventional concept of a genuinely Western modernity.
The Sun is surrounded by a hot atmosphere at 10^6 K, the corona. The corona and the solar wind are fully ionized and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrometers, like the SUMER instrument on board the SOHO spacecraft, reveal a preferential heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider, e.g., protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium, and the solar corona is far from this state. Furthermore, fluid models cannot describe processes such as the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not have these limitations and are therefore well suited to explain the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which contradicts observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e. core, halo, and a "strahl" of finite width. But the model is not only applicable to the quiet Sun. The propagation of energetic electrons from a solar flare is studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth to approximately the same degree. In the corona, the interaction of electrons with whistler waves leads not only to scattering but also to the formation of a suprathermal halo, as observed in interplanetary space. This effect is studied both for the solar wind and for the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations: the quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating and can therefore be expected in any hot stellar corona. The second part of this thesis details how growth or damping rates of plasma waves can be calculated from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and of whistler waves during solar flares, are studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
Bodies of water have traditionally been viewed as closed ecosystems, and the cycling of water and nutrients in the pelagic zone of lakes in particular is cited as an example of this. Recently, however, important linkages of the open-water body have been demonstrated, on the one hand with the benthic zone and on the other with the littoral zone, the terrestrial shoreline, and the catchment area. As a result, the horizontal and vertical connectivity of aquatic ecosystems has attracted increased scientific interest in recent years, and with it the ecological functions of the lake bottom (benthic zone) and the shore zones (littoral zone). The newly described connectivity within and between these habitats has far-reaching consequences for our picture of the functioning of these waters. In this habilitation thesis, a series of internal and external functional linkages in the horizontal and vertical spatial dimensions is demonstrated, using the example of running waters and lakes of the northeast German lowlands. The underlying investigations mostly comprised both abiotic and biological variables and covered a broad spectrum thematically, methodologically, and with respect to the water bodies studied. Key ecological processes such as nutrient retention, carbon turnover, extracellular enzyme activity, and resource transfer in food webs (using stable isotope methods) were investigated in laboratory and field experiments as well as through quantitative field measurements.
With respect to running waters, these studies yielded essential insights into the effect of a connectivity-shaped hydromorphology on aquatic biodiversity and benthic-pelagic coupling, which in turn represents a key process for the retention of substances transported in the flowing water and thus, ultimately, for the productivity of a river reach. The littoral zone of lakes was scarcely studied in Central Europe for decades, so that the investigations carried out on the community structure, habitat preferences, and food-web linkages of the eulittoral macrozoobenthos produced fundamentally new insights, which also feed directly into approaches for the ecological assessment of lakeshores under the EC Water Framework Directive. It could thus be shown that the intensity of both internal and external ecological connectivity is substantially influenced by the hydrology and morphology of the water bodies and by nutrient availability, which in this way often shape the ecological functioning of the waters. Vertical and horizontal connectivity contribute to stabilizing the ecosystems involved by enabling the exchange of plant nutrients, biomass, and migrating organisms, thereby bridging periods of resource scarcity. These results can be applied in water management in that safeguarding horizontal and vertical connectivity generally goes hand in hand with spatially more complex, more diverse, temporally and structurally more resilient, and more productive ecosystems, which can therefore be used more intensively and more reliably in a sustainable manner.
Human use of a small selection of the ecosystem services of rivers and lakes has often led to a strong reduction of ecological connectivity and, in consequence, to severe losses of other ecosystem services. The results of the research presented here also show that the development and implementation of strategies for the integrated management of complex social-ecological systems can be substantially supported when horizontal and vertical connectivity is deliberately fostered.
Parsability approaches for several grammar formalisms that also generate non-context-free languages are explored. Chomsky grammars, Lindenmayer systems, grammars with controlled derivations, and grammar systems are treated. Formal properties of these mechanisms are investigated when they are used as language acceptors. Furthermore, cooperating distributed grammar systems are restricted so that efficient deterministic parsing without backtracking becomes possible. For this class of grammar systems, the parsing algorithm is presented and the feature of leftmost derivations is investigated in detail.
This work brings together two introductory chapters and ten essays that can be read as critical-constructive contributions to an "experiential understanding" (Buck) of physics. The traditional design of school physics aims at a systematic presentation of scientific knowledge, which is then applied to selected examples: school experiments prove the statements of the systematic framework (or at least make them plausible), and selected phenomena are explained. Within such a framework, however, there is a real risk of losing touch with students' lived reality and interests. This problem has been known for at least 90 years, but didactic responses (inquiry-based learning, contextualization, student experiments, etc.) tend to address symptoms rather than causes. Science becomes exciting because it establishes a specifically investigative relationship to the world: one would, as it were, have to learn not knowledge but "how to ask questions" (and, of course, how answers are found). But what can this look like at the level of school physics, and what theoretical framework can there be for it? The collected works pursue some of these threads: the rejection of overly model-based thinking in phenomenological optics, the distinction between formal-mathematical thinking and forms of scientific reasoning and evidence closer to lived reality, the potential of alternative interpretations of "physics teaching", the question of "understanding", and more.
In doing so, connections to the modern educational paradigm of competence become visible, and the work also attempts to give a whole series of concrete examples from (school) physics of what happens when the topic is not already-known answers but expeditions into the physical world: the key concepts of the discipline, the methods of data collection and interpretation, and the movements of searching and thinking are then addressed in a way that does not seek to rest on the systematic structure of the subject, but rather to motivate, delineate, and make it comprehensible.
Earthquake faults interact with each other in many different ways, and hence earthquakes cannot be treated as individual independent events. Although earthquake interactions generally lead to a complex evolution of the crustal stress field, this does not necessarily mean that earthquake occurrence becomes random and completely unpredictable. In particular, the interplay between earthquakes can explain the occurrence of pronounced characteristics such as periods of accelerated and depressed seismicity (seismic quiescence) as well as spatiotemporal earthquake clustering (swarms and aftershock sequences). Ignoring the time dependence of the process by looking at time-averaged values, as is largely done in standard procedures of seismic hazard assessment, can thus lead to erroneous estimates not only of the activity level of future earthquakes but also of their spatial distribution. There is therefore an urgent need for applicable time-dependent models. In my work, I aimed to better understand and characterize earthquake interactions in order to improve seismic hazard estimates. For this purpose, I studied seismicity patterns on spatial scales ranging from hydraulic fracture experiments (meters to kilometers) to fault-system size (hundreds of kilometers), while the temporal scale of interest varied from the immediate aftershock activity (minutes to months) to seismic cycles (tens to thousands of years). My studies revealed a number of new characteristics of fluid-induced and stress-triggered earthquake clustering as well as precursory phenomena in earthquake cycles. Analyses of earthquake and deformation data were accompanied by statistical and physics-based model simulations, which allow a better understanding of the role of structural heterogeneities, stress changes, afterslip, and fluid flow.
Finally, new strategies and methods have been developed and tested that help to improve seismic hazard estimates by appropriately taking the time dependence of the earthquake process into account.
This professorial dissertation thesis collects several empirical studies on tax distribution and tax reform in Germany. Chapter 2 deals with two studies on effective income taxation, based on representative micro data sets from tax statistics. The first study analyses the effective income taxation at the individual level, in particular with respect to the top incomes. It is based on an integrated micro data file of household survey data and income tax statistics, which captures the entire income distribution up to the very top. Despite substantial tax base erosion and reductions of top tax rates, the German personal income tax has remained effectively progressive. The distribution of the tax burden is highly concentrated and the German economic elite is still taxed relatively heavily, even though the effective tax rate for this group has significantly declined. The second study of Chapter 2 highlights the effective income taxation of functional income sources, such as labor income, business and capital income, etc. Using income tax micro data and microsimulation models, we allocate the individual income tax liability to the respective income sources, according to different apportionment schemes accounting for losses. We find that the choice of the apportionment scheme markedly affects the tax shares of income sources and implicit tax rates, in particular those of capital income. Income types without significant losses such as labor income or transfer incomes show higher tax shares and implicit tax rates if we account for losses. The opposite is true for capital income, in particular for income from renting and leasing. Chapter 3 presents two studies on business taxation, based on representative micro data sets from tax statistics and the microsimulation model BizTax. The first part provides a study on fundamental reform options for the German local business tax. 
We find that today's high concentration of local business tax revenues on corporations with high profits decreases if the tax base is broadened by integrating more taxpayers and by including more elements of business value added. The reform scenarios with a broader tax base distribute the local business tax revenue per capita more equally across regional categories. The second study of Chapter 3 discusses the macroeconomic performance of business taxation against the background of corporate income. A comparison of the tax base reported in tax statistics with the macroeconomic corporate income from national accounts points to considerable tax base erosion. The average implicit tax rate on corporate income has been around 20 percent since 2001, thus falling considerably short of statutory tax rates and effective tax rates discussed in the literature. For lack of detailed accounting data, it is hard to give precise reasons for the presumptive tax base erosion. Chapter 4 deals with several assessment studies on the ecological tax reform implemented in Germany as of 1999. First, we describe the scientific, ideological, and political background of the ecological tax reform. Further, we present the main findings of a first systematic impact analysis. We employ two macroeconomic models, an econometric input-output model and a recursive-dynamic computable general equilibrium (CGE) model. Both models show that Germany's ecological tax reform helps to reduce energy consumption and CO2 emissions without having a substantial adverse effect on overall economic growth. It could have a slightly positive effect on employment. The reform's impact on the business sector and the effects of special provisions granted to agriculture and the goods and materials sectors are outlined in a further study. The special provisions avoid higher tax burdens on energy-intensive production. However, they largely reduce the marginal tax rates and thus the incentives for energy saving.
Although the 2003 reform of the special provisions increased the overall tax burden of the energy-intensive industry, the enlarged eligibility for tax rebates neutralizes the ecological incentives. Based on the Income and Consumption Survey of 2003, we analyzed the distributional impact of the ecological tax reform. The increased energy taxes have a clearly regressive impact relative to disposable income, and families with children face a higher tax burden relative to household income. The reduction of pension contributions and the automatic adjustment of social security transfers largely mitigate this regressive impact; households with low income or with many children nevertheless bear a slight increase in tax burden. Refunding the eco-tax revenue through an eco bonus would make the reform clearly progressive.
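The implicit tax rate used throughout these studies relates actual tax revenue to a macroeconomic income aggregate. As a minimal sketch of that arithmetic (the numbers below are purely illustrative, not figures from the thesis):

```python
def implicit_tax_rate(tax_revenue: float, income_aggregate: float) -> float:
    """Backward-looking implicit tax rate: revenue divided by income base."""
    if income_aggregate <= 0:
        raise ValueError("income aggregate must be positive")
    return tax_revenue / income_aggregate

# Hypothetical figures in billions of euros, for illustration only:
revenue = 40.0        # corporate income tax revenue from tax statistics
macro_income = 200.0  # corporate income from national accounts
rate = implicit_tax_rate(revenue, macro_income)
print(f"implicit tax rate: {rate:.1%}")  # 20.0%
```

The gap between such a rate and the statutory rate is what points to tax base erosion in the second study of Chapter 3.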
Und der Zukunft abgewandt
(2010)
Since the end of the GDR, which ushered in the collapse of the Eastern Bloc and thus the end of the "Cold War", increasing efforts have been made to define the nature of this state and thereby to understand and classify its consequences at the economic, social, psychological, and educational-policy levels. In this volume, Alexandra Budke analyzes the school subject of geography, which, alongside civics (Staatsbürgerkunde) and history, was a central subject in which the "civic, ideological, or worldview education" defined in the curricula was to take place on the basis of Marxism-Leninism. She clarifies to what extent geography instruction in the GDR was used to communicate and disseminate the geopolitical interests of the state. The detailed analysis of subject teaching thus also makes it possible to answer the question of whether pupils were politically manipulated in class and which scopes of action the central actors of instruction, the teachers and the pupils, perceived within the curricular requirements set by education policy.
Controlling interactions in synthetic polymers as precisely as in proteins would have a strong impact on polymer science. Advanced structural and functional control can lead to the rational design of integrated nano- and microstructures. To achieve this, the properties of monomer-sequence-defined oligopeptides were exploited. Through their incorporation as monodisperse segments into synthetic polymers, we have learned over the past four years how to program the structure formation of polymers, to adjust and exploit interactions in such polymers, to control inorganic-organic interfaces in fiber composites, and to induce structure in biomacromolecules such as DNA for biomedical applications.
This thesis deals with different aspects of flood risk in Germany. Twelve papers present new scientific findings about flood hazards, factors that influence flood losses, and effective private precautionary measures. The seasonal distribution of flooding is shown for the whole of Germany. Furthermore, possible impacts of climate change on discharge and flood frequencies are estimated for the catchment of the river Rhine, and the effects that may result from levee breaches are simulated for reaches of the Lower Rhine. Flood losses are the focus of the second part of the thesis: after the flood of August 2002, approximately 1,700 households were interviewed by telephone. This made it possible to quantify the influence of different factors, such as flood duration or the contamination of the flood water with oil, on the extent of financial flood damage. On this basis, a new model was derived by which flood losses can be calculated on a large scale. It was also possible to derive recommendations for the improvement of private precaution. For example, the analysis revealed that insured households were compensated more quickly and more fully than uninsured ones. It also became clear that different groups, such as tenants and homeowners, have different capabilities for taking precautions; this is to be considered in future risk communication. In 2005 and 2006, the rivers Elbe and Danube were again affected by flooding. A renewed survey among households and public authorities enabled us to investigate the improvement of flood risk management and precaution in the city of Dresden. Several methods and findings of this thesis are applicable to water resources management and contribute to an improvement of flood risk analysis and management in Germany.
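The general shape of such a multifactorial loss model (a depth-damage ratio modified by duration and contamination factors) can be sketched as follows; the functional form and all coefficients here are invented for illustration and are not the model derived in the thesis:

```python
def flood_loss(building_value, water_depth_m, duration_h, oil_contamination):
    """Toy flood-loss estimate: a base depth-damage ratio scaled by
    duration and contamination factors (all coefficients hypothetical)."""
    base_ratio = min(0.05 + 0.10 * water_depth_m, 1.0)   # depth-damage curve
    duration_factor = 1.0 + 0.002 * duration_h           # longer inundation, more damage
    contamination_factor = 1.5 if oil_contamination else 1.0
    ratio = min(base_ratio * duration_factor * contamination_factor, 1.0)
    return building_value * ratio

# A hypothetical building flooded 1.5 m deep for 48 h with oil contamination:
loss = flood_loss(200_000, water_depth_m=1.5, duration_h=48, oil_contamination=True)
```

The survey data described above is what allows the relative weight of such factors to be estimated empirically rather than assumed.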
The uptake of nutrients and their subsequent chemical conversion by reactions which provide energy and building blocks for growth and propagation is a fundamental property of life. This property is termed metabolism. In the course of evolution life has depended on chemical reactions which generate molecules that are common and indispensable to all life forms. These molecules are the so-called primary metabolites. In addition, life has evolved highly diverse biochemical reactions. These reactions allow organisms to produce unique molecules, the so-called secondary metabolites, which provide a competitive advantage for survival. The sum of all metabolites produced by the complex network of reactions within an organism has since 1998 been called the metabolome. The size of the metabolome can only be estimated and may range from fewer than 1,000 metabolites in unicellular organisms to approximately 200,000 in the whole plant kingdom. In current biology, three additional types of molecules are thought to be important to the understanding of the phenomena of life: (1) the proteins, in other words the proteome, including the enzymes which perform the metabolic reactions, (2) the ribonucleic acids (RNAs), which constitute the so-called transcriptome, and (3) all genes of the genome, which are encoded within the double strands of deoxyribonucleic acid (DNA). Investigations of each of these molecular levels of life require analytical technologies which should ideally enable the comprehensive analysis of all proteins, RNAs, et cetera. At the beginning of this thesis such analytical technologies were available for DNA, RNA and proteins, but not for metabolites. Therefore, this thesis was dedicated to the implementation of gas chromatography–mass spectrometry (GC-MS) for the parallel analysis of as many metabolites as possible.
Today GC-MS is one of the most widely applied technologies and indispensable for the efficient profiling of primary metabolites. The main achievements and research topics of this work can be divided into technological advances and novel insights into the metabolic mechanisms which allow plants to cope with environmental stresses. Firstly, the GC-MS profiling technology has been highly automated and standardized. The major technological achievements were (1) substantial contributions to the development of automated and, within the limits of GC-MS, comprehensive chemical analysis, (2) contributions to the implementation of time-of-flight mass spectrometry for GC-MS based metabolite profiling, (3) the creation of a software platform for reproducible GC-MS data processing, named TagFinder, and (4) the establishment of an internationally coordinated library of mass spectra which allows the identification of metabolites in diverse and complex biological samples. In addition, the Golm Metabolome Database (GMD) has been initiated to harbor this library and to cope with the increasing amount of generated profiling data. This database makes publicly available all chemical information essential for GC-MS profiling and has been extended to a global resource of GC-MS based metabolite profiles. Querying the concentration changes of hundreds of known and yet unidentified metabolites has recently been enabled by uploading standardized, TagFinder-processed data. Long-term technological development has been pursued with two central aims: (1) to enhance the precision of absolute and relative quantification and (2) to enable the combined analysis of metabolite concentrations and metabolic flux. In contrast to concentrations, which provide information on metabolite amounts, flux analysis provides information on the speed of biochemical reactions or reaction sequences, for example on the rate of CO2 conversion into metabolites.
This conversion is an essential function of plants and the basis of life on earth. Secondly, GC-MS based metabolite profiling technology has been continuously applied to advance plant stress physiology. These efforts have yielded a detailed description of, and new functional insights into, metabolic changes in response to high and low temperatures, as well as common and divergent responses to salt stress among higher plants such as Arabidopsis thaliana, Lotus japonicus and rice (Oryza sativa). Time course analyses after temperature stress and investigations into salt dosage responses indicated that metabolism changed in a gradual manner rather than by stepwise transitions between fixed states. In agreement with these observations, metabolite profiles of the model plant Lotus japonicus, when exposed to increased soil salinity, were demonstrated to have high predictive power for both NaCl accumulation and plant biomass. Thus, it may be possible to use GC-MS based metabolite profiling as a breeding tool to support the selection of individual plants that cope best with salt stress or other environmental challenges.
Computational cosmology
(2008)
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered the only “experiment” available to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to efficiently distribute the work and to focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure for setting up their initial conditions. As running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates has yet been detected. Utilizing both the novel simulation code and the halo finder, we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics.
We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides: while the alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade simulation codes and computer hardware have advanced to such a stage that it became possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field that is now known as “Near-Field Cosmology”. We present an in-depth study of the dynamics of subhaloes and the development of debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their hosts may not be negligible, and that the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
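The mapping from a simulated particle distribution to "objects" is the job of a halo finder. The thesis develops its own grid-based finder, so the friends-of-friends linking sketched below (naive O(N²), one-dimensional positions, illustrative linking length) is only the simplest stand-in for the idea of grouping particles into halos:

```python
def friends_of_friends(positions, linking_length):
    """Group particles into halos: any two particles closer than the
    linking length belong to the same group (naive union-find sketch)."""
    parent = list(range(len(positions)))

    def find(i):
        # Locate the group root, compressing the path as we go.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if abs(positions[i] - positions[j]) < linking_length:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(len(positions)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# Two clumps of particles on a line, well separated:
halos = friends_of_friends([0.0, 0.1, 0.15, 5.0, 5.05], linking_length=0.2)
```

Production halo finders use spatial trees or grids to avoid the quadratic pair loop and work in three dimensions with periodic boundaries.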
The Arctic plays a key role in Earth’s climate system, as global warming is predicted to be most pronounced at high latitudes and because one third of the global carbon pool is stored in ecosystems of the northern latitudes. In order to improve our understanding of present and future carbon dynamics in climate-sensitive permafrost ecosystems, the present study concentrates on microbial controls of methane fluxes, on the activity and structure of the involved microbial communities, and on their response to changing environmental conditions. For this purpose an integrated research strategy was applied, which connects trace gas flux measurements to the soil-ecological characterisation of permafrost habitats and to molecular ecological analyses of microbial populations. Furthermore, methanogenic archaea isolated from Siberian permafrost have been used as potential keystone organisms for studying and assessing life under extreme living conditions. Long-term studies on methane fluxes have been carried out since 1998. These studies revealed considerable seasonal and spatial variations of methane emissions for the different landscape units, ranging from 0 to 362 mg m⁻² d⁻¹. For the overall balance of methane emissions from the entire delta, the first land cover classification based on Landsat images was performed and applied for an upscaling of the methane flux data sets. The regionally weighted mean daily methane emission of the Lena Delta (10 mg m⁻² d⁻¹) is only one fifth of the values calculated for other Arctic tundra environments. The calculated annual methane emission of the Lena Delta amounts to about 0.03 Tg. The low methane emission rates obtained in this study are the result of the high-resolution remote sensing data basis used, which provides a more realistic estimation of the actual methane emissions on a regional scale. Soil temperature and near-surface atmospheric turbulence were identified as the driving parameters of methane emissions.
A flux model based on these variables explained the variations of the methane budget reasonably well, corresponding to the continuous processes of microbial methane production and oxidation and of gas diffusion through soil and plants. The results show that the Lena Delta contributes significantly to the global methane balance because of its extensive wetland areas. The microbiological investigations showed that permafrost soils are colonized by high numbers of microorganisms; the total biomass is comparable to that of temperate soil ecosystems. The activities of methanogens and methanotrophs differed significantly in their rates and distribution patterns, both along the vertical profiles and between the investigated soils. Methane production rates varied between 0.3 and 38.9 nmol h⁻¹ g⁻¹, while methane oxidation ranged from 0.2 to 7.0 nmol h⁻¹ g⁻¹. Phylogenetic analyses of methanogenic communities revealed a distinct diversity of methanogens affiliated with the Methanomicrobiaceae, Methanosarcinaceae and Methanosaetaceae, which partly form four specific permafrost clusters. The results demonstrate the close relationship between methane fluxes and the fundamental microbiological processes in permafrost soils. The microorganisms do not only survive in their extreme habitat but can also be metabolically active under in situ conditions. It was shown that a slight increase in temperature can lead to a substantial increase in methanogenic activity within perennially frozen deposits. In the case of permafrost degradation, this would lead to an extensive mobilization of these methane sources, with subsequent impacts on the total methane budget. Further studies on the stress response of methanogenic archaea, especially Methanosarcina SMA-21, isolated from Siberian permafrost, revealed an unexpected resistance of the microorganisms to unfavourable living conditions. A better adaptation to environmental stress was observed at 4 °C than at 28 °C.
For the first time it could be demonstrated that methanogenic archaea from terrestrial permafrost even survived simulated Martian conditions. The results show that permafrost methanogens are more resistant than methanogens from non-permafrost environments under Mars-like climate conditions. Microorganisms comparable to methanogens from terrestrial permafrost can be seen as one of the most likely candidates for life on Mars due to their physiological potential and metabolic specificity.
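The regionally weighted mean emission reported above follows from the land cover classification by weighting each class's mean flux with its area share. A minimal sketch of that upscaling step, with purely illustrative class values rather than the Lena Delta data:

```python
def area_weighted_mean(fluxes, areas):
    """Area-weighted mean flux: sum(flux_i * area_i) / sum(area_i)."""
    total_area = sum(areas)
    return sum(f * a for f, a in zip(fluxes, areas)) / total_area

# Hypothetical land-cover classes: mean fluxes in mg CH4 m^-2 d^-1
# and class areas in km^2 (e.g. wet polygonal tundra, moist tundra, dry surfaces)
fluxes = [30.0, 5.0, 0.5]
areas = [5000.0, 10000.0, 10000.0]
mean_flux = area_weighted_mean(fluxes, areas)
```

Because low-flux classes often dominate the area, the weighted mean can lie well below a naive average of chamber measurements, which is exactly the effect the high-resolution classification revealed.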
Proteins are chain molecules built from amino acids. The precise sequence of the 20 different types of amino acids in a protein chain defines into which structure a protein folds, and the three-dimensional structure in turn specifies the biological function of the protein. The reliable folding of proteins is a prerequisite for their robust function. Misfolding can lead to protein aggregates that cause severe diseases, such as Alzheimer's, Parkinson's, or the variant Creutzfeldt-Jakob disease. Small single-domain proteins often fold without experimentally detectable metastable intermediate states. The folding dynamics of these proteins is thought to be governed by a single transition-state barrier between the unfolded and the folded state. The transition state is highly unstable and cannot be observed directly. However, mutations in which a single amino acid of the protein is substituted by another one can provide indirect access. The mutations slightly change the transition-state barrier and, thus, the folding and unfolding times of the protein. The central question is how to reconstruct the transition state from the observed changes in folding times. In this habilitation thesis, a novel method to extract structural information on transition states from mutational data is presented. The method is based on (i) the cooperativity of structural elements such as alpha-helices and beta-hairpins, and (ii) on splitting up mutation-induced free-energy changes into components for these elements. By fitting few parameters, the method reveals the degree of structure formation of alpha-helices and beta-hairpins in the transition state. In addition, it is shown in this thesis that the folding routes of small single-domain proteins are dominated by loop-closure dependencies between the structural elements.
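The fitting step described above can be sketched as a small least-squares problem: each mutation's effect on the folding barrier is modeled as a weighted sum of its free-energy components in the structural elements, and the weights (degrees of structure formation) are fitted. The membership matrix and free-energy values below are invented for illustration; this is a schematic of the idea, not the thesis's actual procedure:

```python
def fit_structure_formation(components, ddg_folding):
    """Least-squares fit of structure-formation degrees (phi_1, phi_2) for two
    cooperative elements (e.g. a helix and a hairpin), assuming each mutation's
    effect on the folding barrier is the phi-weighted sum of its free-energy
    components in the two elements.  Solves the 2x2 normal equations directly."""
    a11 = sum(c[0] * c[0] for c in components)
    a12 = sum(c[0] * c[1] for c in components)
    a22 = sum(c[1] * c[1] for c in components)
    b1 = sum(c[0] * d for c, d in zip(components, ddg_folding))
    b2 = sum(c[1] * d for c, d in zip(components, ddg_folding))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three hypothetical mutations: free-energy components in (helix, hairpin)
# and the observed changes of the folding barrier (kcal/mol, illustrative):
components = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
ddg_folding = [0.8, 0.3, 1.1]
phi_helix, phi_hairpin = fit_structure_formation(components, ddg_folding)
```

With consistent data the fit recovers one structure-formation degree per element, which is the structural readout the method provides.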
The role played by azobenzene polymers in modern photonic, electronic and opto-mechanical applications can hardly be overestimated. These polymers are successfully used to produce alignment layers for liquid crystalline fluorescent polymers in display and semiconductor technology, to build waveguides and waveguide couplers, as data storage media, and as labels in quality product protection. A very hot topic in modern research is light-driven artificial muscles based on azobenzene elastomers. The incorporation of azobenzene chromophores into polymer systems via covalent bonding, or even by blending, gives rise to a number of unusual effects under visible (VIS) and ultraviolet light irradiation. The most amazing effect is the inscription of surface relief gratings (SRGs) onto thin azobenzene polymer films. At least seven models have been proposed to explain the origin of the inscribing force, but none of them satisfactorily describes the light-induced material transport on the molecular level. In most models, to explain the mass transport over micrometer distances during irradiation at room temperature, it is necessary to assume a considerable degree of photoinduced softening, at least comparable with that at the glass transition. Contrary to this assumption, we have gathered convincing evidence that there is no considerable softening of the azobenzene layers under illumination. We can now say with confidence that light-induced softening is a very weak accompanying effect rather than a necessary condition for the formation of SRGs. This means that the inscribing force should lie above the yield point of the azobenzene polymer. Hence, an appropriate approach to describing the formation and relaxation of SRGs is a viscoplastic theory. It was used to reproduce the pulse-like inscription of SRGs as measured by VIS light scattering.
At longer inscription times the VIS scattering pattern exhibits some peculiarities which can be explained by the appearance of a density grating; this grating is shown to arise from the finite compressibility of the polymer film. As a logical consequence of the aforementioned research, a thermodynamic theory explaining the light-induced deformation of free-standing films and the formation of SRGs is proposed. The basic idea of this theory is that under homogeneous illumination an initially isotropic sample should stretch itself along the polarization direction to compensate the entropy decrease produced by the photoinduced reorientation of azobenzene chromophores. Finally, some ideas about the further development of this controversial topic are discussed.
Phonology limited
(2007)
Phonology Limited is a study of the areas of phonology where the application of optimality theory (OT) has previously been problematic. Evidence from a wide variety of phenomena in a wide variety of languages is presented to show that interactions involving more than just faithfulness and markedness are best analyzed as involving language-specific morphological constraints rather than universal phonological constraints. OT has proved to be a highly insightful and successful theory of linguistics in general and phonology in particular, focusing as it does on surface forms and treating the relationship between inputs and outputs as a form of conflict resolution. Yet there have also been a number of serious problems with the approach that have led some detractors to argue that OT has failed as a theory of generative grammar. The most serious of these problems is opacity, defined as a state of affairs where the grammatical output of a given input appears to violate more constraints than an ungrammatical competitor. It is argued that these problems disappear once language-specific morphological constraints are allowed to play a significant role in analysis. Specifically, a number of processes of Tiberian Hebrew traditionally considered opaque are reexamined and shown to be straightforwardly transparent, but crucially involving morphological constraints on form, such as a constraint requiring certain morphological forms to end with a syllabic trochee, or a constraint requiring paradigm uniformity with regard to the occurrence of fricative allophones of stop phonemes. Language-specific morphological constraints are also shown to play a role in allomorphy, where a lexeme is associated with more than one input; the constraint hierarchy then decides which input is grammatical in which context. For example, [ɨ]/[ə] and [u]/[ə] alternation found in some lexemes but not in others in Welsh is attributed to the presence of two inputs for the lexemes with the alternation. 
A novel analysis of the initial consonant mutations of the modern Celtic languages argues that mutated forms are separately listed inputs chosen in appropriate contexts by constraints on morphology and syntax, rather than being outputs that are phonologically unfaithful to their unmutated inputs. Finally, static irregularities and lexical exceptions are examined and shown to be attributable to language-specific morphological constraints. In American English, the distribution of tense and lax vowels is predictable in several contexts; however, in some contexts, the distributions of tense [ɔ] vs. lax [a] and of tense [æ] vs. lax [æ] are not as expected. It is shown that clusters of output-output faithfulness constraints create a pattern to which words are attracted, even though this pattern violates general phonological considerations. New words that enter the language first obey the general phonological considerations before being attracted into the language-specific exceptional pattern.
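OT evaluation itself is a simple algorithm: candidates are compared on a strictly ranked constraint hierarchy, and violation counts on higher-ranked constraints are decisive regardless of lower-ranked ones. A minimal sketch with toy constraints and candidates (invented for illustration, not drawn from the Tiberian Hebrew or Welsh analyses):

```python
def ot_winner(candidates, ranked_constraints):
    """Return the optimal candidate: violation profiles are compared
    lexicographically, so one violation of a high-ranked constraint
    outweighs any number of violations of lower-ranked constraints."""
    return min(candidates, key=lambda c: tuple(con(c) for con in ranked_constraints))

# Toy candidates: (surface form, number of deleted input segments)
VOWELS = set("aeiou")
no_coda = lambda c: sum(1 for syl in c[0].split(".") if syl[-1] not in VOWELS)
max_io = lambda c: c[1]  # faithfulness: penalize deletion

# Input /pat/: faithful candidate with a coda vs. a candidate deleting /t/
cands = [("pat", 0), ("pa", 1)]
ranking_a = [no_coda, max_io]  # NoCoda >> Max-IO: deletion wins
ranking_b = [max_io, no_coda]  # Max-IO >> NoCoda: faithful form wins
```

Reranking the same constraints flips the winner, which is how OT derives cross-linguistic variation; the thesis's point is that some of the constraints doing the deciding are language-specific morphological ones.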
Biogenic amines are small organic compounds that can act as neurotransmitters, neuromodulators and/or neurohormones in both vertebrates and invertebrates. They form an important group of messenger substances and exert their effects by binding to a particular class of receptor proteins known as G protein-coupled receptors. In insects, the class of biogenic amines comprises the messengers dopamine, tyramine, octopamine, serotonin and histamine. Among many other effects, it has been shown, for example, that some of these biogenic amines can modulate the gustatory sensitivity to sucrose stimuli in the honeybee (Apis mellifera). I have investigated various aspects of aminergic signal transduction in the "model organisms" honeybee and American cockroach (Periplaneta americana). From the honeybee, a model organism for the study of learning and memory, two dopamine receptors, a tyramine receptor, an octopamine receptor and a serotonin receptor were characterized. The receptors were expressed in cultured mammalian cells in order to analyze their pharmacological and functional properties (coupling to intracellular messenger pathways). Furthermore, various techniques (RT-PCR, Northern blotting, in situ hybridization) were used to investigate where and when during development the corresponding receptor mRNAs are expressed in the honeybee brain. The salivary glands of the American cockroach were used as a model system for investigating the cellular effects of biogenic amines. In isolated salivary glands, saliva production can be elicited with both dopamine and serotonin, whereby saliva of different composition is produced. Dopamine induces the formation of a completely protein-free, watery saliva; serotonin causes the secretion of a protein-containing saliva.
The serotonin-induced protein secretion is mediated by an increase in the concentration of the intracellular messenger cAMP. The pharmacological properties of the dopamine receptors of the cockroach salivary glands were investigated, and the molecular characterization of putative aminergic receptors of the cockroach was begun. Furthermore, I characterized the ebony gene of the cockroach. This gene encodes an enzyme that, as in other insects, is probably involved in the inactivation of biogenic amines in the cockroach and is expressed in its brain and salivary glands.
The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. Therefore, the present study combines numerical modeling and the analysis of real data in order to unveil the relationships between physical mechanisms and observational quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes large earthquakes to occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity, like the frequency-size distribution and the temporal clustering of earthquakes, depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models for identifying regions in parameter space which are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity in terms of a critical state which precedes a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution. Furthermore, a growth of the spatial correlation length and an acceleration of the seismic energy release prior to large events are found. The predictive power of these precursors is, however, limited.
Rather than forecasting the time, location, and magnitude of individual events, contributing to a broad multiparameter approach therefore appears more promising.
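The scale-free frequency-size distribution mentioned above is the Gutenberg-Richter law, log10 N(≥M) = a − bM. Its b-value can be estimated from a catalog with the classical Aki maximum-likelihood estimator; a minimal sketch (the sample magnitudes below are synthetic):

```python
import math

def b_value(magnitudes, completeness_mag):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value: b = log10(e) / (mean(M) - Mc), using only events at or
    above the completeness magnitude Mc."""
    mags = [m for m in magnitudes if m >= completeness_mag]
    mean_excess = sum(m - completeness_mag for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess

# Synthetic toy catalog: mean excess over Mc = 2.0 is 0.5, so b ~ 0.87
b = b_value([2.0, 2.5, 3.0], completeness_mag=2.0)
```

Tracking such statistics through time is one ingredient of the multiparameter approach advocated above.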
The immobilization or mobilization and transport of pollutants in the environment, especially in the soil and water compartments, are of fundamental importance for our survival on Earth. One of the main reaction partners for organic and inorganic pollutants (xenobiotics) in the environment are humic substances (HS). HS are degradation products of plant and animal tissue formed by a combination of chemical and biological decomposition and transformation processes. Owing to their genesis, HS represent extraordinarily heterogeneous material systems that display a range of different interactions with pollutants. Investigating the fundamental interaction mechanisms, as well as describing them quantitatively, places the highest demands on the analytical methods. For the qualitative and quantitative characterization of the interactions between HS and xenobiotics, analytical methods are therefore required that can deliver meaningful data when studying extremely heterogeneous systems. Spectroscopic techniques in particular, such as luminescence-based methods, combine excellent selectivity and sensitivity with a multidimensionality (for luminescence, the observables are the intensity IF, the excitation wavelength λex, the emission wavelength λem, and the fluorescence decay time τF) that makes it possible to study even heterogeneous systems such as HS directly. Both the intrinsic fluorescence properties of the HS and those of specially introduced luminescent probes can be used for characterization. In both cases, the underlying fundamental concepts of the interactions of HS with xenobiotics are investigated and characterized. For the intrinsic fluorescence of HS it could be shown that, besides molecular structures, the linkage of the fluorophores within the overall HS molecule is of particular importance.
Conformational freedom and the proximity to HS-intrinsic groups acting as energy acceptors are important components of the characteristics of HS fluorescence. The quenching of the intrinsic fluorescence by metal complexation is accordingly also a result of the altered conformational freedom of the HS caused by the bound metal ions. It was found that, depending on the metal ion, both quenching and enhancement of the intrinsic HS fluorescence can be observed. Polycyclic aromatic hydrocarbons and lanthanide ions were employed as extrinsic luminescent probes with well-characterized photophysical properties. Investigations at very low temperatures (10 K) made it possible for the first time to study the microenvironment of hydrophobic xenobiotics bound to HS. In comparison with room-temperature experiments, it could be shown that hydrophobic xenobiotics bound to HS reside in a microenvironment whose polarity is analogous to that of short-chain alcohols. For the case of metal complexation, energy transfer processes between HS and lanthanide ions, and between different bound lanthanide ions, were investigated. Based on these measurements, conclusions can be drawn about the electronic states of the HS involved on the one hand, and about the distances between metal binding sites within the HS on the other. It should be noted that the experiments were carried out in solution at realistic concentrations. From measurements of the energy transfer rates, direct conclusions about conformational changes and aggregation processes of HS can be derived.
Potassium ions (K+) are the most abundant inorganic cations in plants; measured by dry weight, their share can reach up to 10%. Potassium ions fulfill important functions in various processes in the plant; for example, they are essential for growth and metabolism. Many important enzymes work optimally at a K+ concentration of around 100 mM. For this reason, plant cells maintain a controlled potassium concentration of about 100 mM in those compartments that are involved in metabolism. The uptake of potassium ions from the soil and their transport within the plant and within a plant cell are enabled by various potassium transport proteins. Maintaining a stable K+ concentration, however, is only possible if the activity of these transport proteins is subject to strict control. The processes that regulate the transport proteins are still only partially understood, yet more detailed knowledge in this field is of central importance for understanding how the transport proteins are integrated into the complex system of the plant organism. This habilitation thesis summarizes the author's publications describing investigations of various regulatory mechanisms of plant potassium channels. These investigations comprise a spectrum of protein-biochemical, biophysical, and plant-physiological analyses. To understand the regulatory mechanisms at a fundamental level, their structural and molecular particularities are examined on the one hand, and the biophysical and kinetic relationships of the regulatory mechanisms are analyzed on the other. The insights gained allow a new, more detailed interpretation of the physiological role of potassium transport proteins in the plant.
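The steepness of the K+ gradients that such transport proteins maintain can be put into numbers with the textbook Nernst equation, E = (RT/zF) ln([K+]_out/[K+]_in). A minimal sketch (the concentrations are typical textbook values, not measurements from the thesis):

```python
import math

# Nernst equilibrium potential E = (R*T)/(z*F) * ln(c_out/c_in).
R = 8.314    # J/(mol*K), gas constant
F = 96485.0  # C/mol, Faraday constant

def nernst_mV(c_out_mM, c_in_mM, T=298.15, z=1):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Illustrative situation: ~1 mM K+ in the soil solution outside the
# cell, ~100 mM K+ maintained in the cytosol.
e_k = nernst_mV(c_out_mM=1.0, c_in_mM=100.0)
print(f"E_K = {e_k:.0f} mV")  # about -118 mV at 25 degrees C
```

A hundredfold concentration ratio thus corresponds to roughly 118 mV per decade-squared at room temperature, which is why channel gating must be tightly voltage- and concentration-regulated.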
It has been known for several years that, under certain conditions, electrons can be confined within thin layers even if these layers consist of metal and are supported by a metal substrate. In photoelectron spectra, such layers show characteristic discrete energy levels, and it has turned out that these lead to large effects like the oscillatory magnetic coupling technically exploited in modern hard-disk read heads. The current work asks to what extent the concepts underlying quantization in two-dimensional films can be transferred to lower dimensionality. This problem is approached by a stepwise transition from two-dimensional layers to one-dimensional nanostructures. These nanostructures are represented on the one hand by terraces on atomically stepped surfaces, and on the other hand by atom chains deposited onto these terraces, up to complete coverage by atomically thin nanostripes. Furthermore, self-organization effects are used to arrive at perfectly one-dimensional atomic arrangements at surfaces. Angle-resolved photoemission is particularly suited as the method of investigation because it reveals the behavior of the electrons in these nanostructures as a function of spatial direction, which distinguishes it from, e.g., scanning tunneling microscopy. With this method, intense and at times surprisingly large effects of one-dimensional quantization are observed for various exemplary systems, partly for the first time. The essential role of band gaps in the substrate, known from two-dimensional systems, is confirmed for nanostructures. In addition, we reveal an ambiguity, without precedent in two-dimensional layers, between spatial confinement of electrons on the one side and superlattice effects on the other, as well as between effects caused by the sample and by the measurement process. The latter effects are huge and can dominate the photoelectron spectra.
Finally, the effects of reduced dimensionality are studied in particular for the d electrons of manganese, which are additionally affected by strong correlation effects. Surprising results are also obtained here. ---------------------------- The links to the sources of the publications included in the appendix can be found on page 83 of the full text.
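The discrete levels of electrons confined in a thin film can be estimated, to lowest order, from the particle-in-a-box model E_n = (ħπn)²/(2mL²). The sketch below uses the free-electron mass and hard walls as simplifying assumptions; the real systems of course require the substrate band structure discussed above:

```python
import math

# Particle-in-a-box levels E_n = (hbar*pi*n)^2 / (2*m*L^2): a
# zeroth-order model for quantum-well states in a thin metal film.
HBAR = 1.054571817e-34  # J*s
M_E = 9.1093837015e-31  # kg, free-electron mass (an idealization)
EV = 1.602176634e-19    # J per eV

def box_level_eV(n, thickness_nm):
    """Energy of level n (in eV) for a hard-wall box of given width."""
    L = thickness_nm * 1e-9
    return (HBAR * math.pi * n) ** 2 / (2.0 * M_E * L * L) / EV

# The level spacing shrinks as the film gets thicker:
for L_nm in (1.0, 2.0):
    e1, e2 = box_level_eV(1, L_nm), box_level_eV(2, L_nm)
    print(f"L = {L_nm} nm: E1 = {e1:.2f} eV, E2 = {e2:.2f} eV")
```

For a 1 nm film the spacing is a sizable fraction of an eV, which is why the discrete levels are directly visible in photoelectron spectra.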
In this work, the basic principles of self-organization of diblock copolymers having the inherent property of selective or specific non-covalent binding were examined. By introducing electrostatic, dipole–dipole, or hydrogen-bonding interactions, it was hoped to add complexity to the self-assembled mesostructures and to extend the level of ordering from the nanometer to a larger length scale. This work may be seen in the framework of biomimetics, as it combines features of synthetic polymer and colloid chemistry with basic concepts of structure formation applying in supramolecular and biological systems. The copolymer systems under study were (i) block ionomers, (ii) block copolymers with acetoacetoxy chelating units, and (iii) polypeptide block copolymers.
The behaviour of an adhering cell is strongly influenced by the chemical, topographical and mechanical properties of the surface it attaches to. During recent years, it has been found experimentally that adhering cells actively sense the elastic properties of their environment by pulling on it through numerous sites of adhesion. The resulting build-up of force at sites of adhesion depends on the elastic properties of the environment and is converted into corresponding biochemical signals, which can trigger cellular programmes like growth, differentiation, apoptosis, and migration. In general, force is an important regulator of biological systems, for example in hearing and touch, in wound healing, and in rolling adhesion of leukocytes on vessel walls. In the habilitation thesis by Ulrich Schwarz, several theoretical projects are presented which address the role of forces and elasticity in cell adhesion. (1) A new method has been developed for calculating cellular forces exerted at sites of focal adhesion on micro-patterned elastic substrates. The main result is that cell-matrix contacts function as mechanosensors, converting internal force into protein aggregation. (2) A one-step master equation for the stochastic dynamics of adhesion clusters as a function of cluster size, rebinding rate and force has been solved both analytically and numerically. Moreover, this model has been applied to the regulation of cell-matrix contacts, to dynamic force spectroscopy, and to rolling adhesion. (3) Using linear elasticity theory and the concept of force dipoles, a model has been introduced and solved which predicts the positioning and orientation of mechanically active cells in soft material, in good agreement with experimental observations for fibroblasts on elastic substrates and in collagen gels.
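The stochastic adhesion-cluster dynamics of point (2) can be explored with a simple Gillespie-type simulation. In the sketch below, each closed bond ruptures with a Bell-type rate exp(f/N_closed) (shared load f in units of the intrinsic force scale) and each open bond rebinds with rate gamma; all rates are in units of the unstressed single-bond off-rate, and the parameter values are illustrative, not those of the thesis:

```python
import math, random

def mean_cluster_lifetime(n_total, gamma, f_total, runs=2000, seed=1):
    """Mean time until all bonds of a cluster have ruptured.

    Closed bonds rupture at rate exp(f_total/n_closed) each (Bell
    model with shared loading); open bonds rebind at rate gamma each.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        n_closed, t = n_total, 0.0
        while n_closed > 0:
            k_off = n_closed * math.exp(f_total / n_closed)
            k_on = (n_total - n_closed) * gamma
            rate = k_off + k_on
            t += rng.expovariate(rate)        # waiting time to next event
            if rng.random() < k_off / rate:   # a closed bond ruptures...
                n_closed -= 1
            else:                             # ...or an open bond rebinds
                n_closed += 1
        total += t
    return total / runs

# Rebinding stabilizes the cluster: the lifetime grows with gamma.
t_no_rebind = mean_cluster_lifetime(5, gamma=0.0, f_total=1.0)
t_rebind = mean_cluster_lifetime(5, gamma=1.0, f_total=1.0)
print(t_no_rebind, t_rebind)
```

Even this toy version reproduces the qualitative result that rebinding dramatically extends cluster lifetime, while increasing the shared force cuts it down.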
Understanding the formation of stars in galaxies is central to much of modern astrophysics. For several decades it has been thought that the star formation process is primarily controlled by the interplay between gravity and magnetostatic support, modulated by neutral-ion drift. Recently, however, both observational and numerical work has begun to suggest that supersonic interstellar turbulence rather than magnetic fields controls star formation. This review begins with a historical overview of the successes and problems of both the classical dynamical theory of star formation and the standard theory of magnetostatic support, from both observational and theoretical perspectives. We then present the outline of a new paradigm of star formation based on the interplay between supersonic turbulence and self-gravity. Supersonic turbulence can provide support against gravitational collapse on global scales, while at the same time it produces localized density enhancements that allow for collapse on small scales. The efficiency and timescale of stellar birth in Galactic gas clouds strongly depend on the properties of the interstellar turbulent velocity field, with slow, inefficient, isolated star formation being a hallmark of turbulent support, and fast, efficient, clustered star formation occurring in its absence. After discussing in detail various theoretical aspects of supersonic turbulence in compressible self-gravitating gaseous media relevant for star-forming interstellar clouds, we explore the consequences of the new theory for both local star formation and galactic-scale star formation. The theory predicts that individual star-forming cores are likely not quasi-static objects, but dynamically evolving. Accretion onto these objects will vary with time and depend on the properties of the surrounding turbulent flow. This has important consequences for the resulting stellar mass function.
Star formation on the scale of galaxies as a whole is expected to be controlled by the balance between gravity and turbulence, just like star formation on the scale of individual interstellar gas clouds, but may be modulated by additional effects like cooling and differential rotation. The dominant mechanism for driving interstellar turbulence in star-forming regions of galactic disks appears to be supernova explosions. In the outer disk of our Milky Way, or in low surface brightness galaxies, the coupling of rotation to the gas through magnetic fields or gravity may become important.
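The competition between gravity and (turbulent) support sketched above can be made concrete with the textbook Jeans mass, folding the turbulent velocity dispersion into an effective sound speed c_eff² = c_s² + σ²/3. A back-of-the-envelope sketch with illustrative molecular-cloud numbers (not results of this review):

```python
import math

# Jeans mass M_J = (5 c^2 / G)^(3/2) * (3/(4 pi rho))^(1/2), with an
# effective sound speed c_eff^2 = c_s^2 + sigma^2/3 to include
# turbulent support. Textbook formula; the numbers are illustrative.
G = 6.674e-11      # m^3 kg^-1 s^-2
K_B = 1.3807e-23   # J/K
M_H = 1.6726e-27   # kg
M_SUN = 1.989e30   # kg

def jeans_mass_msun(T_K, n_cm3, sigma_kms=0.0, mu=2.33):
    """Jeans mass in solar masses for temperature T, number density n,
    turbulent 1D-equivalent dispersion sigma, mean molecular weight mu."""
    c2 = K_B * T_K / (mu * M_H) + (sigma_kms * 1e3) ** 2 / 3.0
    rho = n_cm3 * 1e6 * mu * M_H  # kg/m^3
    return ((5.0 * c2 / G) ** 1.5
            * math.sqrt(3.0 / (4.0 * math.pi * rho)) / M_SUN)

# Thermal support only vs. added turbulent support (sigma = 1 km/s):
print(jeans_mass_msun(10.0, 100.0))
print(jeans_mass_msun(10.0, 100.0, sigma_kms=1.0))
```

At 10 K and 100 cm⁻³ the purely thermal Jeans mass is a few tens of solar masses; a 1 km/s turbulent dispersion raises it by more than an order of magnitude, illustrating global turbulent support, while the local density enhancements the turbulence creates lower the threshold again in shocked regions.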
We theoretically discuss the interaction of neutral particles (atoms, molecules) with surfaces in the regime where it is mediated by the electromagnetic field. A thorough characterization of the field at sub-wavelength distances is worked out, including energy density spectra and coherence functions. The results are applied to typical situations in integrated atom optics, where ultracold atoms are coupled to a thermal surface, and to single molecule probes in near field optics, where sub-wavelength resolution can be achieved.
Electrets are materials capable of storing oriented dipoles or an electric surplus charge for long periods of time. The term "electret" was coined by Oliver Heaviside in analogy to the well-known word "magnet". Initially regarded as a mere scientific curiosity, electrets became increasingly important for applications during the second half of the 20th century. The most famous example is the electret condenser microphone, developed in 1962 by Sessler and West. Today, these devices are produced in annual quantities of more than 1 billion, and have become indispensable in modern communications technology. Even though space-charge electrets are widely used in transducer applications, relatively little was known about the microscopic mechanisms of charge storage. It was generally accepted that the surplus charges are stored in some form of physical or chemical traps. However, trap depths of less than 2 eV, obtained via thermally stimulated discharge experiments, conflicted with the observed lifetimes (extrapolations of experimental data yielded more than 100,000 years). Using a combination of photostimulated discharge spectroscopy and simultaneous depth-profiling of the space-charge density, the present work shows for the first time that at least part of the space charge in, e.g., polytetrafluoroethylene, polypropylene and polyethylene terephthalate is stored in traps with depths of up to 6 eV, indicating major local structural changes. Based on this information, more efficient charge-storing materials could be developed in the future. The new experimental results could only be obtained after several techniques for characterizing the electrical and electromechanical properties of electrets had been enhanced with in situ capability. For instance, real-time information on space-charge depth-profiles was obtained by subjecting a polymer film to short laser-induced heat pulses.
The high data acquisition speed of this technique also allowed the three-dimensional mapping of polarization and space-charge distributions. A highly active field of research is the development of piezoelectric sensor films from electret polymer foams. These materials store charges on the inner surfaces of the voids after having been subjected to a corona discharge, and exhibit piezoelectric properties far superior to those of traditional ferroelectric polymers. By means of dielectric resonance spectroscopy, polypropylene foams (presently the most widely used ferroelectret) were studied with respect to their thermal and UV stability. Their limited thermal stability renders them unsuitable for applications above 50 °C. Using a solvent-based foaming technique, we found an alternative material based on amorphous Teflon® AF, which exhibits a stable piezoelectric coefficient of 600 pC/N at temperatures up to 120 °C.
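The extreme sensitivity of charge retention to trap depth, central to the discussion above, follows directly from an Arrhenius estimate, τ = ν⁻¹ exp(E/kT). A minimal sketch (the attempt frequency ν is an assumed order-of-magnitude value, not a measured one):

```python
import math

# Arrhenius estimate of the dwell time of a charge in a trap of
# depth E: tau = (1/nu) * exp(E / (k_B * T)). The attempt frequency
# nu is an assumed typical value, not a measurement from this work.
K_B_EV = 8.617e-5  # Boltzmann constant in eV/K
YEAR = 3.156e7     # seconds per year

def trap_lifetime_years(depth_eV, T_K=300.0, nu_Hz=1e12):
    """Estimated retention time in years at temperature T_K."""
    return math.exp(depth_eV / (K_B_EV * T_K)) / nu_Hz / YEAR

for e in (0.5, 1.0, 2.0, 6.0):
    print(f"E = {e} eV -> tau ~ {trap_lifetime_years(e):.1e} years")
```

Because the depth enters exponentially, a 1 eV trap empties within hours at room temperature while a few eV already exceed geological timescales, which is why the measured trap depths matter so much for interpreting the observed charge stability.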
It has long been known that after a biomaterial comes into contact with its biological environment, during implantation or extracorporeal interaction, proteins from the surrounding milieu are adsorbed first, with the surface properties of the material determining the composition of the protein layer and the conformation of the proteins it contains. The subsequent interaction of cells with the material is therefore generally mediated by this adsorbed layer. The influence of the surfaces on the composition and conformation of the proteins, and on the subsequent interaction with cells, is of particular interest: on the one hand it allows conclusions about the applicability of a material, and on the other hand knowledge of these relationships can be used to develop new materials with improved biocompatibility. In this habilitation thesis, the influence of the composition of polymers and of their surface properties on the adsorption of proteins, on the activation state of plasmatic coagulation, and on the adhesion of cells was therefore investigated. Possibilities for influencing these processes by changing the bulk composition or by surface modification of biomaterials were also presented. Findings from this work could be used for the development of membranes for biohybrid organs.
In a classical context, synchronization means the adjustment of rhythms of self-sustained periodic oscillators due to their weak interaction. The history of synchronization goes back to the 17th century, when the famous Dutch scientist Christiaan Huygens reported his observation of the synchronization of pendulum clocks: when two such clocks were put on a common support, their pendula moved in perfect agreement. In rigorous terms, this means that due to the coupling the clocks started to oscillate with identical frequencies and tightly related phases. Although it is probably the oldest scientifically studied nonlinear effect, synchronization was understood only in the 1920s, when E. V. Appleton and B. van der Pol systematically studied, theoretically and experimentally, the synchronization of triode generators. Since then, the theory has been well developed and has found many applications. Nowadays it is well known that certain systems, even rather simple ones, can exhibit chaotic behaviour. This means that their rhythms are irregular and cannot be characterized by a single frequency. However, as is shown in the habilitation work, one can extend the notion of phase to systems of this class as well and observe their synchronization, i.e., an agreement of their (still irregular!) rhythms: due to very weak interaction, relations between the phases and average frequencies appear. This effect, called phase synchronization, was later confirmed in laboratory experiments by other scientific groups. The understanding of synchronization of irregular oscillators allowed us to address an important problem of data analysis: how to reveal weak interaction between systems if we cannot influence them but can only observe them passively, measuring some signals. This situation is encountered very often in biology, where synchronization phenomena appear on every level, from cells to macroscopic physiological systems, in normal states as well as in severe pathologies.
With our methods we found that the cardiovascular and respiratory systems in humans can adjust their rhythms; the strength of their interaction increases with maturation. Next, we used our algorithms to analyse the brain activity of Parkinsonian patients. The results of this collaborative work with neuroscientists show that different brain areas synchronize just before the onset of pathological tremor. Moreover, we succeeded in localizing the brain areas responsible for tremor generation.
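The core quantities of such an analysis, an instantaneous phase extracted from a measured signal and a phase-locking index, can be sketched in a few lines. The construction below uses the standard FFT-based Hilbert transform and the synchronization index |⟨exp(iΔφ)⟩|; the test signals are synthetic, merely illustrating locked versus drifting rhythms:

```python
import numpy as np

def instantaneous_phase(x):
    """Phase of the analytic signal, via an FFT-based Hilbert
    transform (zero the negative frequencies, double the positive)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(X * h))

def sync_index(x, y):
    """|<exp(i*dphi)>|: near 1 for phase locking, near 0 for drift."""
    dphi = instantaneous_phase(x) - instantaneous_phase(y)
    return abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0.0, 100.0, 4000, endpoint=False)
ref = np.sin(2 * np.pi * 1.0 * t)
locked = np.sin(2 * np.pi * 1.0 * t + 0.7)  # fixed phase offset
drifting = np.sin(2 * np.pi * 1.31 * t)     # incommensurate frequency
print(sync_index(ref, locked), sync_index(ref, drifting))
```

The same index applied to measured signals (heartbeat and respiration, or activity of different brain areas) is the kind of passive-observation tool described above; for real, noisy data it is compared against surrogate data before any interaction is claimed.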
Our everyday experience is filled with various acoustic noise and music. Usually noise is a nuisance in any communication and destroys order in a system. Similar optical effects are known: heavy snow or rain reduces visibility. In contrast to these situations, noisy stimuli can also play a positive, constructive role; for example, a driver may be more concentrated in the presence of quiet music. Transmission processes in neural systems are of special interest from this point of view: excitation or information is transmitted only if a signal exceeds a threshold. Dr. Alexei Zaikin from the University of Potsdam studies noise-induced phenomena in nonlinear systems from a theoretical point of view. He is especially interested in processes in which noise influences the behaviour of a system twice: if the intensity of the noise exceeds a threshold, it induces a regular structure that becomes synchronized with the behaviour of neighbouring elements. To obtain such a system with a threshold, one needs a second noise source. Dr. Zaikin has analyzed further examples of such doubly stochastic effects and developed a concept of these new phenomena. These theoretical findings are important because such processes can play a crucial role in neurophysics, technical communication devices, and the life sciences.
Highly collimated, high-velocity streams of hot plasma – the jets – are observed as a general phenomenon in a variety of astrophysical objects of very different size and energy output. Known jet sources are protostellar objects (T Tauri stars, embedded IR sources), galactic high-energy sources ("microquasars"), and active galactic nuclei (extragalactic radio sources and quasars). Within the last two decades, our knowledge of the processes involved in astrophysical jet formation has condensed into a kind of standard model: the scenario of a magnetohydrodynamically accelerated and collimated jet stream launched from the innermost part of an accretion disk close to the central object. Traditionally, the problem of jet formation is divided into two categories. One is the question of how to collimate and accelerate an uncollimated, low-velocity disk wind into a jet. The second is the question of how to initiate that outflow from a disk, i.e. how to turn accretion of matter into ejection as a disk wind. My own work is mainly related to the first question, the collimation and acceleration process. Due to the complexity both of the physical processes believed to be responsible for jet launching and of the spatial configuration of the physical components of the jet source, the enigma of jet formation is not yet completely understood. On the theoretical side, there has been substantial advancement during the last decade from purely stationary models to time-dependent simulations, driven by the vast increase in computer power. Observers, on the other hand, do not yet have the instruments at hand to spatially resolve the very origin of the jets. It can be expected that the coming years will also yield substantial improvement on both tracks of astrophysical research.
Three-dimensional magnetohydrodynamic simulations will improve our understanding of the jet-disk interrelation and the time-dependent character of jet formation, the generation of the magnetic field in the jet source, and the interaction of the jet with the ambient medium. Another step will be the combination of radiation transfer computations and magnetohydrodynamic simulations, providing a direct link to the observations. At the same time, a new generation of telescopes (VLT, NGST) in combination with new instrumental techniques (IR interferometry) will lead to a "quantum leap" in jet observation, as the resolution will then be sufficient to zoom into the innermost region of jet formation.
Line driven winds are accelerated by the momentum transfer from photons to a plasma, by absorption and scattering in numerous spectral lines. Line driving is most efficient for ultraviolet radiation, and at plasma temperatures from 10^4 K to 10^5 K. Astronomical objects which show line driven winds include stars of spectral type O, B, and A, Wolf-Rayet stars, and accretion disks over a wide range of scales, from disks in young stellar objects and cataclysmic variables to quasar disks. It is not yet possible to solve the full wind problem numerically, and treat the combined hydrodynamics, radiative transfer, and statistical equilibrium of these flows. The emphasis in the present writing is on wind hydrodynamics, with severe simplifications in the other two areas. I consider three topics in some detail, for reasons of personal involvement. 1. Wind instability, as caused by Doppler de-shadowing of gas parcels. The instability causes the wind gas to be compressed into dense shells enclosed by strong shocks. Fast clouds occur in the space between shells, and collide with the latter. This leads to X-ray flashes which may explain the observed X-ray emission from hot stars. 2. Wind runaway, as caused by a new type of radiative waves. The runaway may explain why observed line driven winds adopt fast, critical solutions instead of shallow (or breeze) solutions. Under certain conditions the wind settles on overloaded solutions, which show a broad deceleration region and kinks in their velocity law. 3. Magnetized winds, as launched from accretion disks around stars or in active galactic nuclei. Line driving is assisted by centrifugal forces along co-rotating poloidal magnetic field lines, and by Lorentz forces due to toroidal field gradients. A vortex sheet starting at the inner disk rim can lead to highly enhanced mass loss rates.
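A standard parametrization of the velocity field of such line-driven winds is the beta velocity law, v(r) = v_∞ (1 − R_*/r)^β. A minimal sketch (v_∞ and β are illustrative O-star-like values, not results of this work):

```python
# Beta velocity law v(r) = v_inf * (1 - R_star/r)**beta, the common
# parametrization of line-driven wind velocity fields. v_inf and beta
# below are illustrative O-star-like numbers.

def wind_velocity(r_over_rstar, v_inf_kms=2000.0, beta=0.8):
    """Wind speed in km/s at radius r given in stellar radii."""
    if r_over_rstar <= 1.0:
        return 0.0
    return v_inf_kms * (1.0 - 1.0 / r_over_rstar) ** beta

# Most of the acceleration happens within a few stellar radii:
for x in (1.5, 2.0, 5.0, 20.0):
    print(x, round(wind_velocity(x), 1))
```

The fast, critical solutions discussed under point 2 are precisely those that reach a law of this kind, as opposed to shallow breeze solutions that never approach v_∞.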
One of the classical ways to describe the dynamics of nonlinear systems is to analyze their Fourier spectra. For periodic and quasiperiodic processes, the Fourier spectrum consists purely of discrete delta functions. By contrast, the spectrum of a chaotic motion is marked by the presence of a continuous component. In this work, we describe the peculiar state, neither regular nor completely chaotic, with a so-called singular-continuous power spectrum. Our investigations concern various cases from very different fields where one meets singular continuous (fractal) spectra. The examples include both physical processes that can be reduced to iterated discrete mappings or even symbolic sequences, and processes whose description is based on ordinary or partial differential equations.
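The two extreme cases can be checked numerically: for a periodic signal essentially all spectral power sits in a few Fourier bins, while a chaotic signal spreads it over the whole band. A sketch using the fully chaotic logistic map as the irregular example (the singular-continuous intermediate case treated in the thesis is deliberately not captured by this simple test):

```python
import numpy as np

def top_bin_power_fraction(x, k=4):
    """Fraction of (mean-removed) spectral power in the k strongest bins."""
    p = np.abs(np.fft.rfft(x - np.mean(x))[1:]) ** 2
    return np.sort(p)[-k:].sum() / p.sum()

n = 4096
t = np.arange(n)
periodic = np.sin(2 * np.pi * t / 32.0)  # a single spectral line

x = np.empty(n)                          # logistic map at r = 4:
x[0] = 0.3                               # fully developed chaos,
for i in range(1, n):                    # broadband spectrum
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

print(top_bin_power_fraction(periodic))  # close to 1
print(top_bin_power_fraction(x))         # small
```

Singular-continuous spectra sit between these extremes: the power concentrates on a fractal set of frequencies, so the bin fraction scales anomalously with resolution rather than tending to 1 or to 0.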
Intermolecular deactivation between an excited fluorophore and a quencher via electron transfer can be described in terms of dynamic and static quenching. It is proposed to divide the dynamic quenching process into a transport phase and an interaction phase. Results on the quenching of N-heteroarenes by naphthalene at high quencher concentrations are described with static quenching. In addition, charge-transfer (CT) systems are investigated. After an overview of static models of resonance energy transfer, a model derived from hit theory is presented and tested on examples. The experiments are computer-controlled.
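Dynamic and static quenching are conventionally separated with the textbook Stern-Volmer analysis: for purely dynamic (collisional) quenching I₀/I = τ₀/τ = 1 + K_SV[Q], whereas purely static quenching reduces the intensity but leaves the measured lifetime at τ₀. A minimal sketch (the constants are illustrative, not values from this work):

```python
# Stern-Volmer analysis: dynamic quenching gives
# I0/I = tau0/tau = 1 + Ksv*[Q]; purely static quenching reduces
# the intensity but not the lifetime. Constants are illustrative.

def stern_volmer_ratio(ksv_per_M, q_M):
    """I0/I for dynamic quenching with Stern-Volmer constant Ksv."""
    return 1.0 + ksv_per_M * q_M

def lifetime_dynamic(tau0_ns, ksv_per_M, q_M):
    """Fluorescence lifetime under purely dynamic quenching, in ns."""
    return tau0_ns / stern_volmer_ratio(ksv_per_M, q_M)

tau0, ksv = 10.0, 50.0  # ns and 1/M, assumed illustrative values
for q in (0.0, 0.01, 0.05):  # quencher concentration in M
    print(q, stern_volmer_ratio(ksv, q), lifetime_dynamic(tau0, ksv, q))
```

Comparing the intensity ratio with the lifetime ratio at each quencher concentration is what distinguishes the two mechanisms; upward-curved Stern-Volmer plots at high quencher concentrations, as in the naphthalene experiments mentioned above, signal an additional static contribution.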
Individuals differ in their tendency to perceive injustice and in their responses to these perceptions. Those high in justice sensitivity tend to show intense negative affective, cognitive, and behavioral responses to injustice that in part also depend on the perspective from which injustice is perceived. The present research project showed that inter-individual differences in justice sensitivity can already be measured and observed in childhood and adolescence, and that early adolescence seems to be an important age range and developmental stage for the stabilization of these differences. Furthermore, in cross-sectional studies the different justice-sensitivity perspectives were related to different forms of externalizing (aggression, ADHD, bullying) and internalizing problem behavior (depressive symptoms), in children and adolescents as well as in adults. Victim sensitivity in particular appears to constitute an important risk factor for a broad range of both externalizing and internalizing maladaptive behaviors and mental health problems, as shown in the studies using longitudinal data. Regarding aggressive behavior, victim justice sensitivity may even constitute a risk factor above and beyond other important and well-established risk factors for aggression and similar sensitivity constructs that had previously been linked to this kind of behavior. In contrast, observer and perpetrator sensitivity (perpetrator sensitivity in particular) tended to show negative links with externalizing problem behavior and instead predicted prosocial behavior in children and adolescents. However, there were also isolated positive relations of perpetrator sensitivity with emotional problems, as well as of observer sensitivity with reactive aggression and depressive symptoms.
Taken together, the findings from the present research show that justice sensitivity forms in childhood at the latest and that it may have important, long-term influences on pro- and antisocial behavior and mental health. Thus, justice sensitivity deserves more attention in prevention and intervention research on mental health problems and antisocial behavior, such as aggression.