The potential of nanosized materials has been largely proven, but a closer look shows that a significant share of this research concerns oxides and metals, while the number of studies drops drastically for metallic ceramics, namely transition metal nitrides and carbides. The lack of related publications does not reflect their potential but rather the difficulties of synthesizing them as dense and defect-free structures, a fundamental prerequisite for advanced mechanical applications.
The present habilitation work aims to close the gap between preparation and processing, indicating novel synthetic pathways for a simpler and more sustainable synthesis of transition metal nitride (MN) and carbide (MC) based nanostructures and for easier processing thereafter. While remaining simple and reliable, the designed synthetic processes allow the production of functional materials with the required size and morphology.
The goal was achieved by exploiting classical and less classical precursors, ranging from common metal salts and molecules (e.g. urea, gelatin, agar) to more exotic materials such as leaves, filter paper and even wood. It was found that the choice of precursors and reaction conditions makes it possible to control chemical composition (going, for instance, from metal oxides to metal oxynitrides to metal nitrides, or from metal nitrides to metal carbides, up to quaternary systems), size (from 5 to 50 nm) and morphology (from simple spherical nanoparticles to rod-like shapes, fibers, layers, mesoporous and hierarchical structures). The nature of the mixed precursors also allows the preparation of metal nitride/carbide based nanocomposites, thus leading to multifunctional materials (e.g. MN/MC@C, MN/MC@PILs) and also allowing dispersion in liquid media. Control over composition, size and morphology is obtained by simple adjustment of the main route, but also by coupling it with processes such as electrospinning, aerosol spraying and bio-templating. Last but not least, the nature of the precursor materials also allows easy processing, including printing, coating, casting, and the preparation of films and thin layers.
The designed routes are conceptually similar: they all start by building up a secondary metal ion-N/C precursor network, which converts, upon heat treatment, into an intermediate “glass”. This glass stabilizes the nascent nanoparticles during nucleation and prevents their uncontrolled growth during the heat treatment (Scheme 1). In this way, one of the main problems in the synthesis of MN/MC, i.e. the need for very high temperatures, could also be overcome (from up to 2000 °C for classical syntheses down to 700 °C in the present cases). The designed synthetic pathways are also conceived to allow the use of non-toxic compounds and to minimize (or even avoid) post-synthesis purification, while still yielding phase-pure and well-defined (crystalline) nanoparticles.
This research helps simplify the preparation of MN/MC, making these systems readily available in suitable amounts for both fundamental and applied science. The prepared systems have been tested (in some cases for the first time) in many different fields: in batteries (MnN0.43@C showed a capacity stabilized at 230 mAh/g, with coulombic efficiencies close to 100%), as alternative magnetic materials (Fe3C nanoparticles were prepared with different sizes and therefore different magnetic behavior, superparamagnetic or ferromagnetic, showing a saturation magnetization of up to 130 emu/g, i.e. similar to the value expected for the bulk material), as filters and for the degradation of organic dyes (outmatching the performance of carbon), and as catalysts (both as active phase and as active support, leading to high turnover rates and, more interestingly, to tunable selectivity). Furthermore, with this route it was possible to prepare, for the first time to the best of our knowledge, well-defined and crystalline MnN0.43, Fe3C and Zn1.7GeN1.8O nanoparticles via bottom-up approaches.
Once the synthesis of these materials is made straightforward, any further modification, combination or manipulation is in principle possible, and new systems can be purposely conceived (e.g. hybrids, nanocomposites, ferrofluids).
Biogenic amines are small organic compounds that can act as neurotransmitters, neuromodulators and/or neurohormones in both vertebrates and invertebrates. They form an important group of messenger substances and exert their effects by binding to a particular class of receptor proteins known as G protein-coupled receptors. In insects, the class of biogenic amines comprises the messengers dopamine, tyramine, octopamine, serotonin and histamine. Among many other effects, it has been shown, for example, that some of these biogenic amines can modulate the gustatory responsiveness to sugar-water stimuli in the honeybee (Apis mellifera). I have investigated various aspects of aminergic signal transduction in the “model organisms” honeybee and American cockroach (Periplaneta americana). From the honeybee, a “model organism” for the study of learning and memory, two dopamine receptors, one tyramine receptor, one octopamine receptor and one serotonin receptor were characterized. The receptors were expressed in cultured mammalian cells in order to analyze their pharmacological and functional properties (coupling to intracellular messenger pathways). Furthermore, various techniques (RT-PCR, Northern blotting, in situ hybridization) were used to investigate where and when the corresponding receptor mRNAs are expressed in the honeybee brain during development. The salivary glands of the American cockroach served as a model system for studying the cellular effects of biogenic amines. In isolated salivary glands, saliva production can be elicited with both dopamine and serotonin, whereby saliva of different composition is produced: dopamine induces the formation of a completely protein-free, watery saliva, while serotonin causes the secretion of a protein-containing saliva.
The serotonin-induced protein secretion is mediated by an increase in the concentration of the intracellular messenger cAMP. The pharmacological properties of the dopamine receptors of the cockroach salivary glands were investigated, and the molecular characterization of putative aminergic receptors of the cockroach was begun. Furthermore, I characterized the ebony gene of the cockroach. This gene encodes an enzyme that is probably involved in the inactivation of biogenic amines in the cockroach (as in other insects) and that is expressed in the brain and salivary glands of the cockroach.
The habilitation thesis covers theoretical investigations of light-induced processes in molecules. The study focuses on changes of the molecular electronic structure and geometry caused either by photoexcitation in the course of a spectroscopic analysis or by selective control with shaped laser pulses. The applied and developed methods are predominantly based on quantum chemistry as well as on electron and nuclear quantum dynamics, and in part on molecular dynamics. The studied scientific problems deal with stereoisomerism and the question of how to either switch or distinguish chiral molecules using laser pulses, and with the essentials for simulating the spectroscopic response of biochromophores in order to unravel their photophysics. The accomplished findings not only explain experimental results and extend existing approaches, but also contribute significantly to the basic understanding of the investigated light-driven molecular processes. The main achievements can be divided into three parts. First, a quantum theory for enantio- and diastereoselective or, in general, stereoselective laser pulse control was developed and successfully applied to influence the chirality of molecular switches. The proposed axially chiral molecules possess different numbers of "switchable" stable chiral conformations, with one particular switch even featuring a truly achiral "off" state which makes it possible to enantioselectively "turn on" its chirality. Furthermore, surface-mounted chiral molecular switches with several well-defined orientations were treated, where a newly devised, highly flexible stochastic pulse optimization technique provides high stereoselectivity and efficiency at the same time, even for coupled chirality-changing degrees of freedom.
Despite the model character of these studies, the proposed types of chiral molecular switches and, all the more, the developed basic concepts are generally applicable to the design of laser-pulse-controlled catalysts for asymmetric synthesis, or to achieving selective changes in the chirality of liquid crystals or in chiroptical nanodevices, implementable in information processing or as data storage. Second, laser-driven electron wavepacket dynamics based on ab initio calculations, namely time-dependent configuration interaction, was extended by the explicit inclusion of magnetic field-magnetic dipole interactions for the simulation of the qualitative and quantitative distinction of enantiomers in mass spectrometry by means of circularly polarized ultrashort laser pulses. The developed approach not only explains the origin of the experimentally observed influence of the pulse duration on the detected circular dichroism in the ion yield, but also predicts laser pulse parameters for an optimal distinction of enantiomers by ultrashort shaped laser pulses. Moreover, these investigations, in combination with the previous ones, provide a fundamental understanding of the relevance of electric and magnetic interactions between linearly or non-linearly polarized laser pulses and (pro-)chiral molecules for either control by enantioselective excitation or distinction by enantiospecific excitation. Third, for selected light-sensitive biological systems of central importance, such as antenna complexes of photosynthesis, simulations of processes taking place during and after photoexcitation of their chromophores were performed in order to explain experimental (spectroscopic) findings and to understand the underlying photophysical and photochemical principles.
In particular, aspects of normal mode mixing due to geometrical changes upon photoexcitation and its impact on (time-dependent) vibronic and resonance Raman spectra, as well as on intramolecular energy redistribution, were addressed. In order to explain unresolved experimental findings, a simulation program for the calculation of vibronic and resonance Raman spectra, accounting for changes in both vibrational frequencies and normal modes, was created based on a time-dependent formalism. In addition, the influence of the biochemical environment on the electronic structure of the chromophores was studied via electrostatic interactions and mechanical embedding using hybrid quantum-classical methods. Environmental effects were found to be important, in particular, for the excitonic coupling of chromophores in light-harvesting complex II. Although simulations for such highly complex systems are still restricted by various approximations, the improved approaches and obtained results have proven to be important contributions to a better understanding of light-induced processes in biosystems, which also supports efforts toward their artificial reproduction.
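Time-dependent formalisms of the kind mentioned above are commonly written (following Heller's wavepacket picture; this is a generic illustration of the approach, not the specific implementation of this thesis) in terms of the autocorrelation function of a nuclear wavepacket propagated on the excited-state potential. The absorption cross section, for example, takes the form

```latex
\sigma_A(\omega) \;\propto\; \omega \int_{-\infty}^{\infty} \mathrm{d}t\;
\langle \phi \,|\, \phi(t) \rangle \;
e^{\,\mathrm{i}\,(\omega + E_0/\hbar)\,t} \; e^{-\Gamma |t|}
```

where \(|\phi\rangle\) is the initial vibrational wavefunction multiplied by the transition dipole moment, \(|\phi(t)\rangle\) its propagation on the excited-state surface, \(E_0\) the initial vibrational energy, and \(\Gamma\) a phenomenological damping; resonance Raman amplitudes follow analogously from half-Fourier transforms of cross-correlation functions. Mixing of normal modes (the Duschinsky effect) enters through the propagation of \(|\phi(t)\rangle\).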
The uptake of nutrients and their subsequent chemical conversion by reactions which provide energy and building blocks for growth and propagation is a fundamental property of life. This property is termed metabolism. In the course of evolution, life has been dependent on chemical reactions which generate molecules that are common and indispensable to all life forms. These molecules are the so-called primary metabolites. In addition, life has evolved highly diverse biochemical reactions. These reactions allow organisms to produce unique molecules, the so-called secondary metabolites, which provide a competitive advantage for survival. The sum of all metabolites produced by the complex network of reactions within an organism has, since 1998, been called the metabolome. The size of the metabolome can only be estimated and may range from fewer than 1,000 metabolites in unicellular organisms to approximately 200,000 in the whole plant kingdom. In current biology, three additional types of molecules are considered important to the understanding of the phenomena of life: (1) the proteins, in other words the proteome, including the enzymes which perform the metabolic reactions, (2) the ribonucleic acids (RNAs), which constitute the so-called transcriptome, and (3) all genes of the genome, which are encoded within the double strands of deoxyribonucleic acid (DNA). Investigations of each of these molecular levels of life require analytical technologies which ideally enable the comprehensive analysis of all proteins, RNAs, et cetera. At the beginning of this thesis, such analytical technologies were available for DNA, RNA and proteins, but not for metabolites. Therefore, this thesis was dedicated to the implementation of gas chromatography-mass spectrometry, in short GC-MS, for the in-parallel analysis of as many metabolites as possible.
Today GC-MS is one of the most widely applied technologies and indispensable for the efficient profiling of primary metabolites. The main achievements and research topics of this work can be divided into technological advances and novel insights into the metabolic mechanisms which allow plants to cope with environmental stresses. Firstly, the GC-MS profiling technology has been highly automated and standardized. The major technological achievements were (1) substantial contributions to the development of automated and, within the limits of GC-MS, comprehensive chemical analysis, (2) contributions to the implementation of time-of-flight mass spectrometry for GC-MS based metabolite profiling, (3) the creation of a software platform for reproducible GC-MS data processing, named TagFinder, and (4) the establishment of an internationally coordinated library of mass spectra which allows the identification of metabolites in diverse and complex biological samples. In addition, the Golm Metabolome Database (GMD) was initiated to harbor this library and to cope with the increasing amount of generated profiling data. This database makes publicly available all chemical information essential for GC-MS profiling and has been extended into a global resource of GC-MS based metabolite profiles. Querying the concentration changes of hundreds of known and as yet unidentified metabolites has recently been enabled by uploading standardized, TagFinder-processed data. Long-term technological efforts have pursued two central aims: (1) to enhance the precision of absolute and relative quantification, and (2) to enable the combined analysis of metabolite concentrations and metabolic flux. In contrast to concentrations, which provide information on metabolite amounts, flux analysis provides information on the speed of biochemical reactions or reaction sequences, for example on the rate of CO2 conversion into metabolites.
This conversion is an essential function of plants and the basis of life on Earth. Secondly, GC-MS based metabolite profiling has been continuously applied to advance plant stress physiology. These efforts have yielded a detailed description of, and new functional insights into, metabolic changes in response to high and low temperatures, as well as common and divergent responses to salt stress among higher plants such as Arabidopsis thaliana, Lotus japonicus and rice (Oryza sativa). Time course analyses after temperature stress and investigations into salt dosage responses indicated that metabolism changes in a gradual manner rather than by stepwise transitions between fixed states. In agreement with these observations, metabolite profiles of the model plant Lotus japonicus, when exposed to increased soil salinity, were demonstrated to have high predictive power for both NaCl accumulation and plant biomass. Thus, it may be possible to use GC-MS based metabolite profiling as a breeding tool to support the selection of individual plants that cope best with salt stress or other environmental challenges.
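Metabolite identification from GC-MS data of the kind described above relies on matching measured mass spectra against a curated reference library. A minimal sketch of that idea, using cosine similarity between intensity vectors (the spectra and metabolite names below are hypothetical illustrations, not the actual TagFinder or GMD algorithms):

```python
import math

def cosine_similarity(spec_a, spec_b):
    """Compare two mass spectra given as {m/z: intensity} dicts."""
    mzs = set(spec_a) | set(spec_b)
    dot = sum(spec_a.get(mz, 0.0) * spec_b.get(mz, 0.0) for mz in mzs)
    norm_a = math.sqrt(sum(v * v for v in spec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in spec_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def identify(measured, library, threshold=0.8):
    """Return the best-matching library entry above the threshold, else None."""
    best_name, best_score = None, threshold
    for name, ref in library.items():
        score = cosine_similarity(measured, ref)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical reference spectra (m/z: relative intensity)
library = {
    "alanine (2TMS)": {116: 100, 147: 40, 73: 90},
    "glycine (3TMS)": {174: 100, 86: 30, 73: 80},
}
measured = {116: 95, 147: 42, 73: 88}
print(identify(measured, library))  # best match in the toy library
```

Real profiling pipelines additionally use retention indices to disambiguate metabolites with near-identical fragmentation patterns, which is one reason a standardized, shared library matters.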
Biological materials have always been used by humans because of their remarkable properties. This is surprising, since these materials are formed under physiological conditions and from commonplace constituents. Nature thus not only provides us with inspiration for designing new materials but also teaches us how to use soft molecules to tune interparticle and external forces and to structure and assemble simple building blocks into functional entities. Magnetotactic bacteria and their chains of magnetosomes represent a striking example of such an accomplishment, where a very simple living organism controls the properties of inorganic matter via organic molecules at the nanometer scale to form a single magnetic dipole that orients the cell along the Earth's magnetic field lines. My group has developed a biological and a bio-inspired line of research based on these bacteria. My research, at the interface between chemistry, materials science, physics, and biology, focuses on how biological systems synthesize, organize and use minerals. We apply the design principles to sustainably form hierarchical materials with controlled properties that can be used, e.g., as magnetically directed nanodevices for applications in sensing, actuation, and transport. In this thesis, I thus first present how magnetotactic bacteria intracellularly form magnetosomes and assemble them into chains. I developed an assay in which cells can be switched from magnetic to non-magnetic states. This made it possible to study the dynamics of magnetosome and magnetosome chain formation. We found that the magnetosomes nucleate within minutes, whereas chains assemble within hours. Magnetosome formation requires iron uptake as ferrous or ferric ions. The transport of the ions within the cell leads to the formation of a ferritin-like intermediate, which is subsequently transported into the magnetosome organelle and transformed into a ferrihydrite-like precursor. Finally, magnetite crystals nucleate and grow to their mature dimensions.
In addition, I show that the magnetosome assembly displays hierarchically ordered nano- and microstructures over several levels, enabling the coordinated alignment and motility of entire populations of cells. The magnetosomes are composed of structurally pure magnetite. The organelles are partly composed of proteins, whose role is crucial for the properties of the magnetosomes. As an example, we showed how the protein MmsF is involved in the control of magnetosome size and morphology. We further showed by 2D X-ray diffraction that the magnetosome particles are aligned along the same direction within the magnetosome chain. We then show how the magnetic properties of the nascent magnetosomes influence the alignment of the particles, and how the proteins MamJ and MamK coordinate this assembly. We propose a theoretical approach which suggests that biological forces are more important than physical ones for chain formation. All these studies thus show how magnetosome formation and organization are under strict biological control, which is associated with unprecedented material properties. Finally, we show that the magnetosome chain enables the cells to find their preferred oxygen conditions when a magnetic field is present. The synthetic part of this work shows how understanding the design principles of magnetosome formation enabled me to perform biomimetic syntheses of magnetite particles within the highly desired size range of 25 to 100 nm. Nucleation and growth of such particles proceed by aggregation of iron colloids termed primary particles, as imaged by cryo high-resolution TEM. I show how additives influence magnetite formation and properties. In particular, MamP, a so-called magnetochrome protein involved in magnetosome formation in vivo, enables the in vitro formation of magnetite nanoparticles exclusively from ferrous iron by controlling the redox state of the process.
Negatively charged additives, such as MamJ, retard magnetite nucleation in vitro, probably by interacting with the iron ions. Other additives, such as polyarginine, can be used to control the colloidal stability of stable single-domain-sized nanoparticles. Finally, I show how we can “glue” magnetic nanoparticles together to form propellers that can be actuated and made to swim with the help of external magnetic fields. We propose a simple theory to explain the observed movement. We can use this theoretical framework to design experimental conditions to sort the propellers by size, and we effectively confirm this prediction experimentally. Thereby, we could image propellers with sizes down to 290 nm in their longest dimension, much smaller than what had been achieved so far.
Potassium ions (K+) are the most abundant inorganic cations in plants, accounting for up to 10% of the dry weight. They serve important functions in various processes within the plant; for example, they are essential for growth and metabolism. Many important enzymes work optimally at a K+ concentration of around 100 mM. For this reason, plant cells maintain a controlled potassium concentration of about 100 mM in those compartments involved in metabolism. The uptake of potassium ions from the soil and their transport within the plant and within a plant cell are mediated by various potassium transport proteins. Maintaining a stable K+ concentration, however, is only possible if the activity of these transport proteins is under strict control. The processes that regulate the transport proteins are still only partially understood, yet more detailed knowledge in this area is of central importance for understanding how the transport proteins are integrated into the complex system of the plant organism. This habilitation thesis summarizes my own publications describing investigations of various regulatory mechanisms of plant potassium channels. These investigations span a spectrum of protein-biochemical, biophysical and plant-physiological analyses. To fundamentally understand the regulatory mechanisms, their structural and molecular features are examined on the one hand, and their biophysical and kinetic relationships are analyzed on the other. The insights gained allow a new, more detailed interpretation of the physiological role of potassium transport proteins in the plant.
Biopsychosocial aspects of occupational reintegration after cardiac rehabilitation
(2020)
Bishops in the Frankish kingdom were influential political actors who, over the course of the 9th century, developed a learned body of knowledge about their own office. Reflections of this knowledge about the nature of the episcopal office can be found in many texts, most of them from West Francia. What remains open, however, is the relevance of this knowledge and of the bishops' sense of their own estate: was it recognized as a normative frame of reference by the other politically relevant estates? How did it develop across the upheavals of the 10th century, in the post-Carolingian period and the beginnings of church reform? The book addresses these questions through a study of bishop depositions in West Francia in the 9th and 10th centuries and through an analysis of the image of the bishop in monastic as well as episcopal circles in the 10th and early 11th centuries. This yields a differentiated picture of how the episcopal office was perceived and how the knowledge about it was handled in various contexts.
Carbon nitride semiconductors: properties and application as photocatalysts in organic synthesis
(2023)
Graphitic carbon nitrides (g-CNs) are represented by melon-type g-CN, poly(heptazine imides) (PHIs), triazine-based g-CN and poly(triazine imide) with intercalated LiCl (PTI/Li+Cl‒). These materials are composed of sp2-hybridized carbon and nitrogen atoms; the C:N ratio is close to 3:4; the building unit is 1,3,5-triazine or tri-s-triazine; the building units are interconnected covalently via sp2-hybridized nitrogen atoms or NH moieties; and the layers are assembled into stacks via weak van der Waals forces, as in graphite. Due to their medium band gap (~2.7 eV), g-CNs such as melon-type g-CN and PHIs are excited by photons with wavelengths ≤ 460 nm. Since 2009, g-CNs have been actively studied as photocatalysts in the evolution of hydrogen and oxygen (the two half-reactions of full water splitting) by employing corresponding sacrificial agents. At the same time, the application of g-CNs as photocatalysts in organic synthesis has remained limited to only a few reactions. This cumulative habilitation summarizes the research conducted between 2017 and 2023 in the field of carbon nitride organic photocatalysis by the group 'Innovative Heterogeneous Photocatalysis', led by Dr. Oleksandr Savatieiev.
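The quoted excitation limit of ≤ 460 nm follows directly from the ~2.7 eV band gap via E = hc/λ. As a quick check (standard physical constants, nothing specific to this work):

```python
H_PLANCK = 4.135667e-15   # Planck constant, eV*s
C_LIGHT = 2.99792458e17   # speed of light, nm/s

def bandgap_to_wavelength_nm(e_gap_ev):
    """Longest wavelength (nm) whose photons can bridge a band gap (eV)."""
    return H_PLANCK * C_LIGHT / e_gap_ev  # lambda = h*c / E

print(round(bandgap_to_wavelength_nm(2.7)))  # → 459 (nm), i.e. blue light
```

The handy shortcut is hc ≈ 1240 eV·nm, so a 2.7 eV gap corresponds to roughly 1240/2.7 ≈ 459 nm, consistent with the 460 nm threshold in the text.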
g-CN photocatalysts activate molecules, i.e. generate their more reactive open-shell intermediates, via three modes: (i) photoinduced electron transfer (PET); (ii) excited-state proton-coupled electron transfer (ES-PCET) or direct hydrogen atom transfer (dHAT); (iii) energy transfer (EnT). The scope of reactions that proceed via oxidative PET, i.e. one-electron oxidation of a substrate to the corresponding radical cation, is represented by the synthesis of sulfonyl chlorides from S-acetylthiophenols. The scope of reactions that proceed via reductive PET, i.e. one-electron reduction of a substrate to the corresponding radical anion, is represented by the synthesis of γ,γ-dichloroketones from enones and chloroform.
Due to the abundance of sp2-hybridized nitrogen atoms in the structure of g-CN materials, they are able to cleave X-H bonds in organic molecules and temporarily store a hydrogen atom. The ES-PCET or dHAT mode of activating organic molecules to the corresponding radicals applies to substrates featuring relatively acidic X-H bonds and to those characterized by low bond dissociation energies, such as C-H bonds next to heteroatoms. On the other hand, reductively quenched g-CN carrying a hydrogen atom reduces a carbonyl compound to the ketyl radical via PCET, a pathway that is thermodynamically more favorable than electron transfer. The scope of these reactions is represented by the cyclodimerization of α,β-unsaturated ketones to cyclopentanols.
The g-CN excited state demonstrates complex dynamics, with initial formation of a singlet excited state which, upon intersystem crossing, produces a triplet excited state with a lifetime > 2 μs. Due to this long lifetime, g-CNs can activate organic molecules via EnT. For example, g-CN sensitizes singlet oxygen, the key intermediate in the dehydrogenation of aldoximes to nitrile oxides. The transient nitrile oxide undergoes [3+2]-cycloaddition with nitriles to give 1,2,4-oxadiazoles.
PET, ES-PCET and EnT are fundamental phenomena with applications beyond organic photocatalysis. A hybrid composite is formed by combining a conductive polymer, poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS), with potassium poly(heptazine imide) (K-PHI). Upon PET, K-PHI modulates the population of polarons and therefore the conductivity of PEDOT:PSS. The initial state of PEDOT:PSS is recovered upon exposure of the material to O2. K-PHI:PEDOT:PSS may thus be applied in O2 sensing.
In the presence of electron donors, such as tertiary amines and alcohols, and under irradiation with light, K-PHI undergoes photocharging: the g-CN material accumulates electrons and charge-compensating cations. This photocharged state is stable under anaerobic conditions for weeks, yet at the same time it is a strong reductant. This feature allows light harvesting and energy storage, in the form of electron-proton couples, to be decoupled in time from their utilization in organic synthesis. The photocharged state of K-PHI reduces nitrobenzene to aniline and enables the dimerization of α,β-unsaturated ketones to hexadienones in the dark.
Electrets are materials capable of storing oriented dipoles or an electric surplus charge for long periods of time. The term "electret" was coined by Oliver Heaviside in analogy to the well-known word "magnet". Initially regarded as a mere scientific curiosity, electrets became increasingly important for applications during the second half of the 20th century. The most famous example is the electret condenser microphone, developed in 1962 by Sessler and West. Today, these devices are produced in annual quantities of more than 1 billion and have become indispensable in modern communications technology. Even though space-charge electrets are widely used in transducer applications, relatively little was known about the microscopic mechanisms of charge storage. It was generally accepted that the surplus charges are stored in some form of physical or chemical traps. However, trap depths of less than 2 eV, obtained via thermally stimulated discharge experiments, conflicted with the observed lifetimes (extrapolations of experimental data yielded more than 100000 years). Using a combination of photostimulated discharge spectroscopy and simultaneous depth-profiling of the space-charge density, the present work shows for the first time that at least part of the space charge in, e.g., polytetrafluoroethylene, polypropylene and polyethylene terephthalate is stored in traps with depths of up to 6 eV, indicating major local structural changes. Based on this information, more efficient charge-storing materials could be developed in the future. The new experimental results could only be obtained after several techniques for characterizing the electrical and electromechanical properties of electrets had been enhanced with in situ capability. For instance, real-time information on space-charge depth-profiles was obtained by subjecting a polymer film to short laser-induced heat pulses.
The high data acquisition speed of this technique also allowed the three-dimensional mapping of polarization and space-charge distributions. A highly active field of research is the development of piezoelectric sensor films from electret polymer foams. These materials store charges on the inner surfaces of the voids after having been subjected to a corona discharge, and exhibit piezoelectric properties far superior to those of traditional ferroelectric polymers. By means of dielectric resonance spectroscopy, polypropylene foams (presently the most widely used ferroelectret) were studied with respect to their thermal and UV stability. Their limited thermal stability renders them unsuitable for applications above 50 °C. Using a solvent-based foaming technique, we found an alternative material based on amorphous Teflon® AF, which exhibits a stable piezoelectric coefficient of 600 pC/N at temperatures up to 120 °C.
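The relation between trap depth and charge lifetime discussed above can be made concrete with the standard Arrhenius picture of thermally activated detrapping, τ = ν₀⁻¹ exp(E/kT). The sketch below is an illustration of this exponential sensitivity under an assumed attempt-to-escape frequency of 10¹³ s⁻¹, not the analysis performed in the thesis:

```python
import math

K_B = 8.617333e-5        # Boltzmann constant, eV/K
NU_0 = 1e13              # assumed attempt-to-escape frequency, 1/s
SECONDS_PER_YEAR = 3.15576e7

def trap_lifetime_years(depth_ev, temp_k=300.0):
    """Arrhenius estimate of the thermal detrapping time of a single trap."""
    tau_s = math.exp(depth_ev / (K_B * temp_k)) / NU_0
    return tau_s / SECONDS_PER_YEAR

# Lifetime at room temperature for a few trap depths
for depth in (1.0, 2.0, 6.0):
    print(f"{depth:.1f} eV -> {trap_lifetime_years(depth):.2e} years")
```

Because the depth enters exponentially, a change of a single eV shifts the predicted lifetime by many orders of magnitude, which is why independent spectroscopic determination of the trap depths (as in this work) is so valuable for reconciling discharge experiments with observed charge stability.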
We theoretically discuss the interaction of neutral particles (atoms, molecules) with surfaces in the regime where it is mediated by the electromagnetic field. A thorough characterization of the field at sub-wavelength distances is worked out, including energy density spectra and coherence functions. The results are applied to typical situations in integrated atom optics, where ultracold atoms are coupled to a thermal surface, and to single molecule probes in near field optics, where sub-wavelength resolution can be achieved.
Computational cosmology
(2008)
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered the only “experiment” available to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to efficiently distribute the work and focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure for setting up their initial conditions. Since running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates has yet been detected. Utilizing both the novel simulation code and the halo finder, we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics.
We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides; while those alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade, simulation codes and computer hardware have advanced to such a stage that it became possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field that is now known as “Near-Field Cosmology”. We present an in-depth study of the dynamics of subhaloes and the development of debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their hosts may not be negligible, and that the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
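Object finders of the kind described above often start from a simple particle-grouping step. The friends-of-friends scheme below is a minimal, generic sketch of such grouping (a naive O(n²) pair search with union-find), not the specific algorithm of the halo finder presented in this work; production finders use trees or grids plus gravitational unbinding.

```python
def friends_of_friends(points, linking_length):
    """Group particles: any two particles closer than linking_length
    belong to the same group (transitively)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    ll2 = linking_length ** 2
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
            if d2 <= ll2:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())


# Two close pairs and one isolated particle -> three groups.
halos = friends_of_friends(
    [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.05, 5.0), (10.0, 0.0)],
    linking_length=0.2,
)
```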
Parsing approaches for several grammar formalisms that also generate non-context-free languages are explored. Chomsky grammars, Lindenmayer systems, grammars with controlled derivations, and grammar systems are treated. Formal properties of these mechanisms are investigated when they are used as language acceptors. Furthermore, cooperating distributed grammar systems are restricted so that efficient deterministic parsing without backtracking becomes possible. For this class of grammar systems, the parsing algorithm is presented and the feature of leftmost derivations is investigated in detail.
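To illustrate what "deterministic parsing without backtracking" means in general (this is a generic LL(1)-style sketch, not the algorithm for cooperating distributed grammar systems developed in this work), the recognizer below handles the toy context-free grammar S → a S b | c, where a single symbol of lookahead always determines which production to apply:

```python
def parse(s):
    """Predictive (LL(1)) recognizer for the toy grammar S -> a S b | c.

    Because the lookahead symbol uniquely selects the production,
    the parser never has to undo a choice, i.e. no backtracking."""
    pos = 0

    def peek():
        return s[pos] if pos < len(s) else None

    def parse_s():
        nonlocal pos
        if peek() == 'a':       # choose S -> a S b
            pos += 1
            parse_s()
            if peek() != 'b':
                raise SyntaxError("expected 'b'")
            pos += 1
        elif peek() == 'c':     # choose S -> c
            pos += 1
        else:
            raise SyntaxError("expected 'a' or 'c'")

    parse_s()
    if pos != len(s):
        raise SyntaxError("trailing input")
    return True
```

The grammar generates aⁿ c bⁿ, so `parse("aacbb")` succeeds while `parse("ab")` fails immediately, without exploring alternatives.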
The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. Therefore, the present study combines numerical modeling with the analysis of real data in order to unveil the relationships between physical mechanisms and observational quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes large earthquakes to occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity like the frequency-size distribution and the temporal clustering of earthquakes depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models for identifying the regions in parameter space that are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity in terms of a critical state which precedes a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution. Furthermore, the growth of the spatial correlation length and the acceleration of the seismic energy release prior to large events are found. The predictive power of these precursors is, however, limited.
Instead of forecasting the time, location, and magnitude of individual events, a contribution to a broad multiparameter approach appears more promising.
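The frequency-size distribution referred to above is commonly summarized by the b-value of the Gutenberg-Richter law, log10 N(≥M) = a − b·M. As a generic illustration (not a procedure taken from this work), the classical Aki maximum-likelihood estimator can be sketched as follows; the synthetic catalog and the completeness magnitude are assumptions made for the example:

```python
import math
import random


def aki_b_value(magnitudes, m_c):
    """Aki (1965) maximum-likelihood b-value of the Gutenberg-Richter law,
    b = log10(e) / mean(M - m_c), for events at or above completeness m_c."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_excess = sum(m - m_c for m in mags) / len(mags)
    return math.log10(math.e) / mean_excess


# Synthetic catalog with a true b-value of 1.0: magnitude excesses above
# the completeness level m_c = 2.0 are exponential with rate b * ln(10).
random.seed(42)
catalog = [2.0 + random.expovariate(1.0 * math.log(10)) for _ in range(20000)]
b_hat = aki_b_value(catalog, m_c=2.0)  # close to the true value 1.0
```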
This habilitation thesis includes seven case studies that examine climate variability during the past 3.5 million years from different temporal and spatial perspectives. The main geographical focus is on the climatic events of the African and Asian monsoonal systems, the North Atlantic, and the Arctic Ocean. The results of this study are based on marine and terrestrial climate archives obtained by sedimentological and geochemical methods and subsequently analyzed by various statistical methods.
The results presented herein provide a picture of the climatic background conditions of past cold and warm periods, the sensitivity of past climate phases to changes in atmospheric carbon dioxide content, and the tight linkage between the low- and high-latitude climate systems. Based on these results, it is concluded that a warm background climate state strongly influenced and/or partially reversed the linear relationships between individual climate processes that are valid today. The work also emphasizes the low latitudes as a driver of high-latitude climate variability, contrary to the conventional view that the global climate change of the past 3.5 million years was predominantly controlled by high-latitude climate variability. Furthermore, it is found that on long geological time scales (>1000 years to millions of years), variations in solar irradiance due to changes in the Earth-Sun-Moon system may have increased the sensitivity of the low and high latitudes to changes in atmospheric carbon dioxide.
Taken together, these findings provide new insights into the sensitivity of past climate phases and provide new background conditions for numerical models that predict future climate change.
Gravity dictates the structure of the whole Universe and, although it is triumphantly described by the theory of General Relativity, it is the force in nature that we understand least. One of the cardinal predictions of this theory is black holes. Massive, dark objects are found in the majority of galaxies; our own Galactic Center contains such an object with a mass of about four million solar masses. Are these objects supermassive black holes (SMBHs), or do we need alternatives? The answer lies in the event horizon, the characteristic that defines a black hole. The key to probing the horizon is to model the movement of stars around an SMBH, and the interactions between them, and look for deviations from real observations. Nuclear star clusters harboring a massive, dark object with a mass of up to ~ ten million solar masses are good testbeds to probe the event horizon of the potential SMBH with stars. Stars interact with the central SMBH through two channels: (a) compact stars and stellar-mass black holes can gradually inspiral into the SMBH due to the emission of gravitational radiation, which is known as an “Extreme Mass Ratio Inspiral” (EMRI), and (b) stars can produce gas that will be accreted by the SMBH, through normal stellar evolution or through collisions and disruptions brought about by the strong central tidal field. Such processes can contribute significantly to the mass of the SMBH. These two processes involve different disciplines, which combined will provide us with detailed information about the fabric of space and time. In this habilitation I present nine articles of my recent work directly related to these topics.
This work presents the functioning and acquisition of German capitalization on a theoretical and empirical basis. The starting point is a text-pragmatic generalization of previous graphematic approaches, which are extended into a comprehensive model of majuscule use in German that also includes non-orthographic domains (all-caps setting, small capitals, medial capitals, etc.).
The empirical part of the work examines the orthographic performance data of about 5,700 subjects of various age groups (4th grade to adult education) and develops them into a general acquisition model of capitalization. Using neural network simulations, different learner types are distinguished and discontinuities in competence acquisition are demonstrated, pointing to qualitative strategy shifts in ontogeny. The work closes with reflections on the data from the perspectives of orthography didactics and spelling diagnostics.
Devotio malefica
(2021)
Ancient cursing rituals aimed to enforce the cursers' respective notions of justice, especially when neither the public justice system nor socially accepted codes of conduct could meet that claim. The rituals employed so-called defixionis tabellae (curse tablets), here termed devotiones maleficae. They usually consist of inscribed lead lamellae and were produced to harm one or more victims.
Sara Chiarini examines the cursing language used in them, whose formulaic structures and components point to a tradition of the cursing ritual. Individual additions, by contrast, offer clues to the circumstances surrounding the ritual's creation, the emotional state of the cursers, and the kinds of punishment corresponding to the ritual's legal dimension. Chiarini supplements the previous state of research with the newly discovered and published curse tablets and engages comprehensively with this epigraphic material.
It has long been known that, upon contact of a biomaterial with its biological environment during implantation or extracorporeal interaction, proteins from the surrounding milieu are adsorbed first, with the material's surface properties determining the composition of the protein layer and the conformation of the proteins it contains. The subsequent interaction of cells with the material is therefore usually mediated by this adsorbed layer. The influence of surfaces on the composition and conformation of the proteins and on the subsequent interaction with cells is of particular interest, since it both permits conclusions about applicability and yields insights into these relationships that can be used to develop new materials with improved biocompatibility. This habilitation thesis therefore investigated the influence of the composition of polymers, and of their surface properties, on the adsorption of proteins, the activation state of plasmatic coagulation, and the adhesion of cells. Ways of influencing these processes by changing the bulk composition or by surface modification of biomaterials were also presented. Findings from this work could be used for the development of membranes for biohybrid organs.
In this work, various experiments investigating the electrical conductivity of suture and collision zones are discussed in context, in order to demonstrate the possibilities that modern magnetotellurics (MT) offers for imaging fossil tectonic systems. From the new high-resolution images of electrical conductivity, potential commonalities of different tectonic units can be derived. Within the last decade, advances in instrumentation and in processing and interpretation methods have opened up entirely new perspectives for geodynamic deep sounding. This is evident from my research, carried out in projects I acquired myself and conducted at the German Research Centre for Geosciences (GFZ) Potsdam. Table A lists the experiments considered in this work, which were carried out in recent years as either array or profile measurements. Field experiments of this size require a team of scientists, students, and technical staff. This also means that students and doctoral candidates under my supervision treated partial aspects of these experiments in diploma, bachelor's, and master's theses or dissertations. I contributed as co-author to the subsequent publications. The enclosed publications contain an introduction to the magnetotelluric method and, where applicable, descriptions of newly developed methods. General presentations of the theoretical foundations of magnetotellurics can be found, for example, in Chave & Jones (2012); Simpson & Bahr (2005); Kaufman & Keller (1981); Nabighian (1987); Weaver (1994). The work also includes a glossary explaining some terms and abbreviations.
I have decided to leave in English those terms for which no adequate German translation exists or which would take on a different or misleading meaning in German. They are marked by italics.
Die Koloniale Karibik
(2012)
Does the Caribbean of the 19th century not anticipate phenomena and processes that are only now entering our awareness? Looking at the kaleidoscopic world of the Caribbean through the literary and cultural trans-processes of that epoch allows entirely new insights into the early processes of cultural globalization. Racist discourses, established models of "white" abolitionists, politics of memory, and the hitherto scarcely acknowledged role of the Haitian Revolution combine into an amalgam that calls into question our common concept of a genuinely Western modernity.
This work assembles two introductory chapters and ten essays that can be read as critical-constructive contributions to an "experiential understanding" ("erlebendes Verstehen", Buck) of physics. The traditional design of school physics aims at a systematic presentation of scientific knowledge, which is then applied to selected examples: school experiments prove the statements of the system (or at least make them plausible), and selected phenomena are explained. Within such a framework, however, there is a real danger of losing touch with students' lived reality and interests. This problem has been known for at least 90 years; didactic responses, such as inquiry-based learning, contextualization, and student experiments, address symptoms rather than causes. Science becomes exciting by establishing a specifically investigative relationship to the world: one would have to learn not knowledge, as it were, but "how to ask questions" (and, of course, how answers are found). But what can this look like at the level of school physics, and what theoretical framework can there be for it? The collected papers pursue some of these traces: the rejection of overly model-based thinking in phenomenological optics, the distinction between formal-mathematical thinking and forms of scientific reasoning and evidence closer to reality, the potential of alternative interpretations of "physics teaching", the question of "understanding", and others.
In doing so, not only do connections to the modern educational paradigm of competence become visible, but the work also attempts to give a whole series of concrete examples from (school) physics of what happens when the topic is not already-known answers but expeditions devoted to the physical world: the key concepts of the discipline, the methods of data collection and interpretation, and the movements of searching and thinking are then discussed in a way that does not seek to rest on the systematics of the subject, but rather to motivate, contour, and make it comprehensible.
Die Plastizität der Gefühle
(2021)
Emotional life is increasingly read out, regulated, and produced by digital technologies. This development, accompanied equally by hopes and fears, is the latest station in a deep entanglement of affect and (cultural) technology reaching back into early history. Bernd Bösel opens up a comprehensive genealogical view of the epoch-making readjustments of this technologization. For only by tracing the various logics of disposing over affects does it become possible to understand the interweaving of the forms of technologization on which the psycho-power of the present is based.
Die verletzte Republik
(2022)
The study asks what narrative literature contributes to a dialogue about forms of violence in the social space of France at the beginning of the 21st century.
Drawing on Bourdieu's concepts from the sociology of literature, it first discusses the perspective on narrated violence necessary for a sociologically relevant grasp of the knowledge of literature. The text corpus examined consists of widely received narrative texts of the literary field in France, most of which appeared in the second decade of the 21st century.
Starting from theoretical considerations of the limits and possibilities of such a field-sociological focus on the literature of the immediate present, the study examines, on concrete textual material and with the tools of literary studies, how and why French literature narrates different forms of violence: the remembrance of the historical traumata of violence of the 20th century, the terrorism of the 21st century, present-day racism and classism, femicides and homophobia, the "left behind" in rural areas but also at the center of the metropolis, and unemployment and poverty in France.
The aim is to open up a perspective from literary studies complementary to sociological and historical research on violence concerning the social space of our European neighbor.
For millennia, droughts could not be understood or defined and were instead given mystical connotations. To understand this natural hazard, we first needed to understand the laws of physics and then develop plausible explanations of the inner workings of the hydrological cycle. Consequently, modeling and predicting droughts was beyond the reach of mankind until the end of the last century. Recent studies estimate that this natural hazard has caused billions of dollars in losses since 1900 and that droughts affected 2.2 billion people worldwide between 1950 and 2014.
For these reasons, droughts have been identified by the IPCC as the trigger of a web of impacts across many sectors, leading to land degradation, migration and substantial socio-economic costs. This thesis summarizes a decade of research carried out at the Helmholtz Centre for Environmental Research on the subject of drought monitoring, modeling, and forecasting, from local to continental scales. The overarching objectives of this study, systematically addressed in the twelve previous chapters, are: 1) Create the capability to seamlessly monitor and predict water fluxes at various spatial resolutions and temporal scales varying from days to centuries; 2) Develop and test a modeling chain for monitoring, forecasting and predicting drought events and related characteristics at national and continental scales; and 3) Develop drought indices and impact indicators that are useful for end-users. Key outputs of this study are: the development of the open-source model mHM, the German Drought Monitor System, the proof of concept for a European multi-model for improving water management from local to continental scales, and the prototype of a crop-yield drought impact model for Germany.
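The drought indices mentioned as a key output can be illustrated with a deliberately simplified standardized anomaly. The sketch below is a generic z-score illustration on assumed toy data, not necessarily the index definition used in the systems described here; operational indices such as the SPI first fit a distribution (e.g. a gamma) to the series before transforming to a standard normal.

```python
import statistics


def standardized_index(series):
    """Standardized anomaly (z-score) of a hydro-meteorological series.

    The interpretation mirrors SPI-type indices: values below about -1
    flag moderate drought, below -2 severe drought."""
    mu = statistics.fmean(series)
    sigma = statistics.stdev(series)
    return [(x - mu) / sigma for x in series]


# Hypothetical monthly precipitation totals (mm); months 3 and 8 are dry.
monthly_precip = [55, 60, 48, 12, 58, 62, 50, 57, 8, 61, 59, 54]
index = standardized_index(monthly_precip)
dry_months = [i for i, z in enumerate(index) if z < -1.0]
```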
Earthquake faults interact with each other in many different ways, and hence earthquakes cannot be treated as individual, independent events. Although earthquake interactions generally lead to a complex evolution of the crustal stress field, this does not necessarily mean that earthquake occurrence becomes random and completely unpredictable. In particular, the interplay between earthquakes can explain the occurrence of pronounced characteristics such as periods of accelerated and depressed seismicity (seismic quiescence) as well as spatiotemporal earthquake clustering (swarms and aftershock sequences). Ignoring the time-dependence of the process by looking at time-averaged values, as is largely done in standard procedures of seismic hazard assessment, can thus lead to erroneous estimations not only of the activity level of future earthquakes but also of their spatial distribution. There is therefore an urgent need for applicable time-dependent models. In my work, I aimed at a better understanding and characterization of earthquake interactions in order to improve seismic hazard estimations. For this purpose, I studied seismicity patterns on spatial scales ranging from hydraulic fracture experiments (meters to kilometers) to fault-system size (hundreds of kilometers), while the temporal scale of interest varied from the immediate aftershock activity (minutes to months) to seismic cycles (tens to thousands of years). My studies revealed a number of new characteristics of fluid-induced and stress-triggered earthquake clustering as well as precursory phenomena in earthquake cycles. The analysis of earthquake and deformation data was accompanied by statistical and physics-based model simulations, which allow a better understanding of the role of structural heterogeneities, stress changes, afterslip and fluid flow.
Finally, new strategies and methods have been developed and tested which help to improve seismic hazard estimations by appropriately taking the time-dependence of the earthquake process into account.
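A standard building block of time-dependent descriptions of the aftershock sequences mentioned above is the modified Omori (Omori-Utsu) law. The parameter values in this sketch are illustrative assumptions, not fits from this work:

```python
def omori_rate(t, k=100.0, c=0.1, p=1.1):
    """Modified Omori (Omori-Utsu) aftershock rate n(t) = K / (c + t)^p,
    with t in days after the mainshock; K, c, p are assumed example values."""
    return k / (c + t) ** p


# The rate decays roughly as a power law: activity one day after the
# mainshock far exceeds the rate a hundred days later.
early = omori_rate(1.0)
late = omori_rate(100.0)
```

Time-averaging such a sequence, as in standard hazard procedures, would smear this strongly peaked activity over the whole interval, which is exactly the error the time-dependent models above are meant to avoid.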
Ecce figura
(2023)
What we talk about when we talk about figures is a complex question that touches several disciplines. Erich Auerbach's figura/Mimesis project initiated interdisciplinary research on this concept. Whether in the history of literature, images, or knowledge, the presence and topicality of figura in Romance and comparative studies attests to a continuing interest in theoretical work between theology, philosophy, literary studies, and art history. What has been lacking so far, however, is a fundamental methodological reflection that gives equal weight to the interdisciplinary aspects and unites them in joint work on the concept.
Remedying this omission is the task of the present work. Starting from Erich Auerbach, Walter Benjamin, and Hannah Arendt, the monograph traces, in comparative constellations from antiquity to modernity, the literary- and art-historical, theological, and philosophical traces of figura, which are developed into a method of literary-philosophical figuralogy.
Ecce figura understands itself as a compendium of interdisciplinary conceptual history between literature, philosophy, and theology, one that invites reading and extension in new constellations.
Weltweit sind fast 40 % der Bevölkerung übergewichtig und die Prävalenz von Adipositas, Insulinresistenz und den resultierenden Folgeerkrankungen wie dem Metabolischen Syndrom und Typ-2-Diabetes steigt rapide an. Als häufigste Ursachen werden diätetisches Fehlverhalten und mangelnde Bewegung angesehen. Die nicht-alkoholische Fettlebererkrankung (NAFLD), deren Hauptcharakteristikum die exzessive Akkumulation von Lipiden in der Leber ist, korreliert mit dem Body Mass Index (BMI). NAFLD wird als hepatische Manifestation des Metabolischen Syndroms angesehen und ist inzwischen die häufigste Ursache für Leberfunktionsstörungen. Die Erkrankung umfasst sowohl die benigne hepatische Steatose (Fettleber) als auch die progressive Form der nicht-alkoholischen Steatohepatitis (NASH), bei der die Steatose von Entzündung und Fibrose begleitet ist. Die Ausbildung einer NASH erhöht das Risiko, ein hepatozelluläres Karzinom (HCC) zu entwickeln und kann zu irreversibler Leberzirrhose und terminalem Organversagen führen. Nahrungsbestandteile wie Cholesterol und Fett-reiche Diäten werden als mögliche Faktoren diskutiert, die den Übergang einer einfachen Fettleber zur schweren Verlaufsform der Steatohepatitis / NASH begünstigen. Eine Ausdehnung des Fettgewebes wird von Insulinresistenz und einer niedrig-gradigen chronischen Entzündung des Fettgewebes begleitet. Neben Endotoxinen aus dem Darm gelangen Entzündungsmediatoren aus dem Fettgewebe zur Leber. Als Folge werden residente Makrophagen der Leber, die Kupfferzellen, aktiviert, die eine Entzündungsantwort initiieren und weitere pro-inflammatorische Mediatoren freisetzen, zu denen Chemokine, Cytokine und Prostanoide wie Prostaglandin E2 (PGE2) gehören. 
In dieser Arbeit soll aufgeklärt werden, welchen Beitrag PGE2 an der Ausbildung von Insulinresistenz, hepatischer Steatose und Entzündung im Rahmen von Diät-induzierter NASH im komplexen Zusammenspiel mit der Regulation der Cytokin-Produktion und anderen Co-Faktoren wie Hyperinsulinämie und Hyperlipidämie hat. In murinen und humanen Makrophagen-Populationen wurde untersucht, welche Faktoren die Bildung von PGE2 fördern und wie PGE2 die Entzündungsantwort aktivierter Makrophagen reguliert. In primären Hepatozyten der Ratte sowie in isolierten humanen Hepatozyten und Zelllinien wurde der Einfluss von PGE2 allein und in Kombination mit Cytokinen, deren Bildung durch PGE2 beeinflusst werden kann, auf die Insulin-abhängige Regulation des Glucose- und Lipid-stoffwechsels untersucht. Um den Einfluss von PGE2 im komplexen Zusammenspiel der Zelltypen in der Leber und im Gesamtorganismus zu erfassen, wurden Mäuse, in denen die PGE2-Synthese durch die Deletion der mikrosomalen PGE-Synthase 1 (mPGES1) vermindert war, mit einer NASH-induzierenden Diät gefüttert. In Lebern von Patienten mit NASH oder in Mäusen mit Diät-induzierter NASH war die Expression der PGE2-synthetisierenden Enzyme Cyclooxygenase 2 (COX2) und mPGES1 sowie die Bildung von PGE2 im Vergleich zu gesunden Kontrollen gesteigert und korrelierte mit dem Schweregrad der Lebererkrankung. In primären Makrophagen aus den Spezies Mensch, Maus und Ratte sowie in humanen Makrophagen-Zelllinien war die Bildung pro-inflammatorischer Mediatoren wie Chemokinen, Cytokinen und Prostaglandinen wie PGE2 verstärkt, wenn die Zellen mit Endotoxinen wie Lipopolysaccharid (LPS), Fettsäuren wie Palmitinsäure, Cholesterol und Cholesterol-Kristallen oder Insulin, das als Folge der kompensatorischen Hyperinsulinämie bei Insulinresistenz verstärkt freigesetzt wird, inkubiert wurden. 
Insulin steigerte dabei synergistisch mit LPS oder Palmitinsäure die Synthese von PGE2 sowie der anderen Entzündungsmediatoren wie Interleukin (IL) 8 und IL-1β. PGE2 reguliert die Entzündungsantwort: Neben der Induktion der eigenen Synthese-Enzyme verstärkte PGE2 die Expression der Immunzell-rekrutierenden Chemokine IL-8 und (C-C-Motiv)-Ligand 2 (CCL2) sowie die der pro-inflammatorischen Cytokine IL-1β und IL-6 in Makrophagen und kann so zur Verstärkung der Entzündungsreaktion beitragen. Außerdem förderte PGE2 die Bildung von Oncostatin M (OSM) und OSM induzierte in einer positiven Rückkopplungsschleife die Expression der PGE2-synthetisierenden Enzyme. Andererseits hemmte PGE2 die basale und LPS-vermittelte Bildung des potenten pro-inflammatorischen Cytokins Tumornekrosefaktor α (TNFα) und kann so die Entzündungsreaktion abschwächen. In primären Hepatozyten der Ratte und humanen Hepatozyten beeinträchtigte PGE2 direkt die Insulin-abhängige Aktivierung der Insulinrezeptor-Signalkette zur Steigerung der Glucose-Verwertung, in dem es durch Signalketten, die den verschiedenen PGE2-Rezeptoren nachgeschaltet sind, Kinasen wie ERK1/2 und IKKβ aktivierte und eine inhibierende Serin-Phosphorylierung der Insulinrezeptorsubstrate bewirkte. PGE2 verstärkte außerdem die IL-6- oder OSM-vermittelte Insulinresistenz und Steatose in primären Hepatozyten der Ratte. Die Wirkung von PGE2 im Gesamtorganismus sollte in Mäusen mit Diät-induzierter NASH untersucht werden. Die Fütterung einer Hochfett-Diät mit Schmalz als Fettquelle, das vor allem gesättigte Fettsäuren enthält, verursachte Fettleibigkeit, Insulinresistenz und eine hepatische Steatose in Wildtyp-Mäusen. 
In Tieren, die eine Hochfett-Diät mit Sojaöl als Fettquelle, das vor allem (ω-6)-mehrfach-ungesättigte Fettsäuren (PUFAs) enthält, oder eine Niedrigfett-Diät mit Cholesterol erhielten, war lediglich eine hepatische Steatose nachweisbar, jedoch keine verstärkte Gewichtszunahme im Vergleich zu Geschwistertieren, die eine Standard-Diät bekamen. Im Gegensatz dazu verursachte die Fütterung einer Hochfett-Diät mit PUFA-reichem Sojaöl als Fettquelle in Kombination mit Cholesterol sowohl Fettleibigkeit und Insulinresistenz als auch hepatische Steatose mit Hepatozyten-Hypertrophie, lobulärer Entzündung und beginnender Fibrose in Wildtyp-Mäusen. Diese Tiere spiegelten alle klinischen und histologischen Parameter der humanen NASH im Metabolischen Syndrom wider. Nur die Kombination von hohen Mengen ungesättigter Fettsäuren aus Sojaöl und Cholesterol in der Nahrung führte zu einer exzessiven Akkumulation des Cholesterols und der Bildung von Cholesterol-Kristallen in den Hepatozyten, die zur Schädigung der Mitochondrien, schwerem oxidativem Stress und schließlich zum Absterben der Zellen führten. Als Konsequenz phagozytieren Kupfferzellen die Zelltrümmer der Cholesterol-überladenen Hepatozyten, werden dadurch aktiviert, setzen Chemokine, Cytokine und PGE2 frei, die die Entzündungsreaktion verstärken und die Infiltration von weiteren Immunzellen initiieren können und verursachen so eine Progression zur Steatohepatitis (NASH). Die Deletion der mikrosomalen PGE-Synthase 1 (mPGES1), dem induzierbaren Enzym der PGE2-Synthese aus Cyclooxygenase-abhängigen Vorstufen, reduzierte die Diät-abhängige Bildung von PGE2 in der Leber. Die Fütterung der NASH-induzierenden Diät verursachte in Wildtyp- und mPGES1-defizienten Mäusen eine ähnliche Fettleibigkeit und Zunahme der Fettmasse sowie die Ausbildung von hepatischer Steatose mit Entzündung und Fibrose (NASH) im histologischen Bild. 
In mPGES1-deficient mice, however, parameters of inflammatory cell infiltration and of diet-dependent liver damage were elevated compared with wild-type animals, which was also reflected in a stronger diet-induced systemic insulin resistance. The formation of the pro-inflammatory and pro-apoptotic cytokine TNFα was enhanced in mPGES1-deficient mice due to the loss of the negative feedback inhibition, resulting in increased diet-induced death of stressed, lipid-overloaded hepatocytes and a downstream inflammatory response. In summary, under the chosen experimental conditions an anti-inflammatory role of PGE2 was verified in vivo, since the prostanoid, mainly indirectly through inhibition of the TNFα-mediated inflammatory reaction, attenuated liver damage, the amplification of inflammation and the development of insulin resistance in diet-dependent fatty liver disease.
The Sun is surrounded by a 10^6 K hot atmosphere, the corona. The corona and the solar wind are fully ionized, and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrometers, like the SUMER instrument on-board the SOHO spacecraft, reveal a preferred heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on-board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid. Multi-fluid models consider e.g. protons and electrons as separate fluids. They enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium; the solar corona is far from this state. Furthermore, fluid models cannot describe processes like the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not show these limitations, and are therefore well-suited for an explanation of the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which is contradicted by observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented that provides electron scattering by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e. 
core, halo, and a "strahl" with finite width. But the model is not only applicable to the quiet Sun. The propagation of energetic electrons from a solar flare is studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth to approximately the same degree. In the corona, the interaction of electrons with whistler waves leads not only to scattering but also to the formation of a suprathermal halo, as observed in interplanetary space. This effect is studied both for the solar wind and for the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations. The quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating, and can therefore be expected in any hot stellar corona. The second part of this thesis details how growth and damping rates of plasma waves are calculated from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and that of whistler waves during solar flares, are studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
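The mirror-force focusing mentioned above follows directly from conservation of the magnetic moment: sin²α/B stays constant along a field line in the collisionless case, so the pitch angle α collapses as the field weakens between the corona and 1 AU. A minimal sketch, with purely illustrative field strengths chosen by me (not values from the thesis):

```python
import math

def focused_pitch_angle(alpha0_deg, B0, B1):
    """Pitch angle (degrees) after adiabatic focusing from field B0 to B1.

    Conservation of the magnetic moment mu = m*v_perp**2 / (2*B) implies
    sin(alpha)**2 / B = const along the field line (no scattering).
    """
    s = math.sin(math.radians(alpha0_deg)) ** 2 * (B1 / B0)
    return math.degrees(math.asin(math.sqrt(s)))

# Assumed, order-of-magnitude values: B ~ 1e-4 T low in the corona,
# ~5e-9 T near Earth at 1 AU.
alpha_1AU = focused_pitch_angle(60.0, 1e-4, 5e-9)
print(f"{alpha_1AU:.2f} deg")  # a fraction of a degree: an extremely narrow strahl
```

Without a scattering mechanism the beam width at 1 AU would thus be far narrower than the observed strahl, which motivates the whistler-wave scattering introduced in the thesis.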
It has been known for several years that under certain conditions electrons can be confined within thin layers even if these layers consist of metal and are supported by a metal substrate. In photoelectron spectra, such layers show characteristic discrete energy levels, and it has turned out that these lead to large effects like the oscillatory magnetic coupling technically exploited in modern hard-disk read heads. The current work asks to what extent the concepts underlying quantization in two-dimensional films can be transferred to lower dimensionality. This problem is approached by a stepwise transition from two-dimensional layers to one-dimensional nanostructures. On the one hand, these nanostructures are represented by terraces on atomically stepped surfaces, on the other hand by atom chains which are deposited onto these terraces, up to complete coverage by atomically thin nanostripes. Furthermore, self-organization effects are used in order to arrive at perfectly one-dimensional atomic arrangements at surfaces. Angle-resolved photoemission is particularly suited as a method of investigation because it reveals the behavior of the electrons in these nanostructures as a function of the spatial direction, which distinguishes it from, e.g., scanning tunneling microscopy. With this method, intense and at times surprisingly large effects of one-dimensional quantization are observed for various exemplary systems, partly for the first time. The essential role of band gaps in the substrate, known from two-dimensional systems, is confirmed for nanostructures. In addition, we reveal an ambiguity without precedent in two-dimensional layers between spatial confinement of electrons on the one side and superlattice effects on the other, as well as between effects caused by the sample and by the measurement process. The latter effects are huge and can dominate the photoelectron spectra. 
Finally, the effects of reduced dimensionality are studied in particular for the d electrons of manganese, which are additionally affected by strong correlation effects. Surprising results are also obtained here. ---------------------------- The links to the sources of the publications included in the appendix can be found on page 83 of the full text.
This professorial dissertation thesis collects several empirical studies on tax distribution and tax reform in Germany. Chapter 2 deals with two studies on effective income taxation, based on representative micro data sets from tax statistics. The first study analyses the effective income taxation at the individual level, in particular with respect to the top incomes. It is based on an integrated micro data file of household survey data and income tax statistics, which captures the entire income distribution up to the very top. Despite substantial tax base erosion and reductions of top tax rates, the German personal income tax has remained effectively progressive. The distribution of the tax burden is highly concentrated and the German economic elite is still taxed relatively heavily, even though the effective tax rate for this group has significantly declined. The second study of Chapter 2 highlights the effective income taxation of functional income sources, such as labor income, business and capital income, etc. Using income tax micro data and microsimulation models, we allocate the individual income tax liability to the respective income sources, according to different apportionment schemes accounting for losses. We find that the choice of the apportionment scheme markedly affects the tax shares of income sources and implicit tax rates, in particular those of capital income. Income types without significant losses such as labor income or transfer incomes show higher tax shares and implicit tax rates if we account for losses. The opposite is true for capital income, in particular for income from renting and leasing. Chapter 3 presents two studies on business taxation, based on representative micro data sets from tax statistics and the microsimulation model BizTax. The first part provides a study on fundamental reform options for the German local business tax. 
We find that today’s high concentration of local business tax revenues on corporations with high profits decreases if the tax base is broadened by integrating more taxpayers and by including more elements of business value added. The reform scenarios with a broader tax base distribute the local business tax revenue per capita more equally across regional categories. The second study of Chapter 3 discusses the macroeconomic performance of business taxation against the background of corporate income. A comparison of the tax base reported in tax statistics with the macroeconomic corporate income from national accounts points to considerable tax base erosion. The average implicit tax rate on corporate income has been around 20 percent since 2001, thus falling considerably short of statutory tax rates and of effective tax rates discussed in the literature. For lack of detailed accounting data, it is hard to give precise reasons for the presumptive tax base erosion. Chapter 4 deals with several assessment studies on the ecological tax reform implemented in Germany as of 1999. First, we describe the scientific, ideological, and political background of the ecological tax reform. Further, we present the main findings of a first systematic impact analysis. We employ two macroeconomic models, an econometric input-output model and a recursive-dynamic computable general equilibrium (CGE) model. Both models show that Germany’s ecological tax reform helps to reduce energy consumption and CO2 emissions without having a substantial adverse effect on overall economic growth. It could even have a slightly positive effect on employment. The reform’s impact on the business sector and the effects of special provisions granted to agriculture and the goods and materials sectors are outlined in a further study. The special provisions avoid higher tax burdens on energy-intensive production. However, they largely reduce the marginal tax rates and thus the incentives for energy saving. 
Though the 2003 reform of the special provisions increased the overall tax burden of the energy-intensive industry, the enlarged eligibility for tax rebates neutralizes the ecological incentives. Based on the Income and Consumption Survey of 2003, we have analyzed the distributional impact of the ecological tax reform. The increased energy taxes have a clearly regressive impact relative to disposable income. Families with children face a higher tax burden relative to household income. The reduction of pension contributions and the automatic adjustment of social security transfers largely mitigate this regressive impact. Households with low income or with many children nevertheless bear a slight increase in tax burden. Refunding the eco-tax revenue through an eco bonus would make the reform clearly progressive.
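The regressivity finding and the effect of a per-household eco bonus can be illustrated with a toy calculation. All figures below are hypothetical, chosen by me only to mimic the qualitative pattern that energy spending, and hence the energy tax, rises less than proportionally with income:

```python
def burden_share(income, energy_tax, bonus=0.0):
    """Net eco-tax burden as a share of disposable income (illustrative)."""
    return (energy_tax - bonus) / income

# Hypothetical monthly household figures (disposable income, energy tax).
incomes    = [1000.0, 2000.0, 4000.0]
energy_tax = [  20.0,   28.0,   40.0]

# Regressive: the burden share falls as income rises.
shares = [burden_share(y, t) for y, t in zip(incomes, energy_tax)]
print([round(s * 100, 2) for s in shares])   # [2.0, 1.4, 1.0] (percent)

# A flat eco bonus refunding the mean tax makes the net burden progressive.
bonus = sum(energy_tax) / len(energy_tax)
net = [burden_share(y, t, bonus) for y, t in zip(incomes, energy_tax)]
print([round(s * 100, 2) for s in net])      # rises with income
```

The same mechanism underlies the study's conclusion: the gross energy tax is regressive, while recycling the revenue as a uniform bonus turns the net incidence progressive.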
Controlling interactions in synthetic polymers as precisely as in proteins would have a strong impact on polymer science. Advanced structural and functional control can lead to the rational design of integrated nano- and microstructures. To achieve this, the properties of monomer-sequence-defined oligopeptides were exploited. Through their incorporation as monodisperse segments into synthetic polymers, we have learned over the past four years how to program the structure formation of polymers, to adjust and exploit interactions in such polymers, to control inorganic-organic interfaces in fiber composites, and to induce structure in biomacromolecules like DNA for biomedical applications.
Ferroelectrets are internally charged polymer foams or cavity-containing polymer-film systems that combine large piezoelectricity with mechanical flexibility and elastic compliance. The term “ferroelectret” was coined based on the fact that it is a space-charge electret that also shows ferroic behavior. In this thesis, comprehensive work on ferroelectrets, and in particular on their preparation, their charging, their piezoelectricity and their applications, is reported.
For industrial applications, ferroelectrets with well-controlled distributions or even uniform values of cavity size and cavity shape and with good thermal stability of the piezoelectricity are very desirable. Several types of such ferroelectrets are developed using techniques such as straightforward thermal lamination, sandwiching sticky templates with electret films, and screen printing. In particular, fluoroethylenepropylene (FEP) film systems with tubular-channel openings, prepared by means of the thermal lamination technique, show piezoelectric d33 coefficients of up to 160 pC/N after charging through dielectric barrier discharges (DBDs). For samples charged at suitable elevated temperatures, the piezoelectricity is stable at temperatures of at least 130°C. These preparation methods are easy to implement at laboratory or industrial scales, and are quite flexible in terms of material selection and cavity geometry design. Due to the uniform and well-controlled cavity structures, the samples are also very suitable for fundamental studies on ferroelectrets.
Charging of ferroelectrets is achieved via a series of dielectric barrier discharges (DBDs) inside the cavities. In the present work, the DBD charging process is comprehensively studied by means of optical, electrical and electro-acoustic methods. The spectrum of the transient light from the DBDs in cellular polypropylene (PP) ferroelectrets directly confirms the ionization of molecular nitrogen, and allows the determination of the electric field in the discharge. Detection of the light emission reveals not only DBDs under high applied voltage but also back discharges when the applied voltage is reduced to sufficiently low values. Back discharges are triggered by the internally deposited charges, as the breakdown inside the cavities is controlled by the sum of the applied electric field and the electric field of the deposited charges. The remanent effective polarization is determined by the breakdown strength of the gas-filled cavities. These findings form the basis of more efficient charging techniques for ferroelectrets such as charging with high-pressure air, thermal poling and charging assisted by gas exchange. With the proposed charging strategies, the charging efficiency of ferroelectrets can be enhanced significantly.
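The interplay of applied field, deposited-charge field and breakdown strength described above can be captured in a toy one-dimensional model (a sketch of my own, not the quantitative model of the thesis): a discharge fires whenever the net field in the gas gap exceeds the breakdown strength, transferring just enough charge to clamp the gap field back to that strength. This reproduces both forward discharges on the rising voltage flank and back discharges on the falling flank, and it makes the remanent charge equal to the breakdown field, as stated above.

```python
def charge_history(applied, E_bd):
    """Toy 1-D DBD charging model for a single cavity (illustrative only).

    The field in the gas gap is the applied field minus the field of the
    deposited surface charge.  Whenever the gap field exceeds the breakdown
    strength E_bd in magnitude, a discharge transfers charge until the gap
    field is clamped back to +/- E_bd.
    """
    sigma = 0.0        # field produced by the deposited charge (same units)
    history = []
    for E_app in applied:
        E_gap = E_app - sigma
        if E_gap > E_bd:        # forward discharge on the rising flank
            sigma = E_app - E_bd
        elif E_gap < -E_bd:     # back discharge, triggered by deposited charge
            sigma = E_app + E_bd
        history.append(sigma)
    return history

# Triangular voltage ramp up to 3*E_bd and back to zero (arbitrary units).
ramp = [0.1 * k for k in range(31)] + [3.0 - 0.1 * k for k in range(31)]
h = charge_history(ramp, E_bd=1.0)
print(round(h[-1], 6))   # remanent charge field after the ramp equals E_bd
```

Despite its simplicity, the model shows why the remanent effective polarization is set by the breakdown strength of the gas-filled cavities rather than by the peak applied voltage.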
After charging, the cavities can be considered as man-made macroscopic dipoles whose direction can be reversed by switching the polarity of the applied voltage. Polarization-versus-electric-field (P(E)) hysteresis loops in ferroelectrets are observed by means of an electro-acoustic method combined with dielectric resonance spectroscopy. P(E) hysteresis loops in ferroelectrets are also obtained by more direct measurements using a modified Sawyer-Tower circuit. Hysteresis loops prove the ferroic behavior of ferroelectrets. However, repeated switching of the macroscopic dipoles involves complex physico-chemical processes. The DBD charging process generates a cold plasma with numerous active species and thus modifies the inner polymer surfaces of the cavities. Such treatments strongly affect the chargeability of the cavities. At least for cellular PP ferroelectrets, repeated DBDs under atmospheric conditions lead to considerable fatigue of the effective polarization and of the resulting piezoelectricity.
The macroscopic dipoles in ferroelectrets are highly compressible, and hence the piezoelectricity is essentially the primary effect. It is found that the piezoelectric d33 coefficient is proportional to the polarization and the elastic compliance of the sample, providing hints for developing materials with higher piezoelectric sensitivity in the future. Due to their outstanding electromechanical properties, there has been constant interest in the application of ferroelectrets. The antiresonance frequencies (fp) of ferroelectrets are sensitive to the boundary conditions during measurement. A tubular-channel FEP ferroelectret is conformably attached to a self-organized minimum-energy dielectric elastomer actuator (DEA). It turns out that the antiresonance frequency (fp) of the ferroelectret film changes noticeably with the bending angle of the DEA. Therefore, the actuation of DEAs can be used to modulate the fp value of ferroelectrets, but fp can also be exploited for in-situ diagnosis and for precise control of the actuation of the DEA. Combination of DEAs and ferroelectrets opens up various new possibilities for application.
This thesis deals with different aspects of flood risk in Germany. In twelve papers, new scientific findings about flood hazards, factors that influence flood losses, and effective private precautionary measures are presented. The seasonal distribution of flooding is shown for the whole of Germany. Furthermore, possible impacts of climate change on discharge and flood frequencies are estimated for the catchment of the river Rhine. Moreover, the effects that may result from levee breaches are simulated for reaches of the Lower Rhine. Flood losses are the focus of the second part of the thesis: after the flood in August 2002, approximately 1700 households were interviewed by telephone. This made it possible to quantify the influence of different factors, such as flood duration or the contamination of the flood water with oil, on the extent of financial flood damage. On this basis, a new model was derived by which flood losses can be calculated on a large scale. It was also possible to derive recommendations for the improvement of private precaution. For example, the analysis revealed that insured households were compensated more quickly and more completely than uninsured ones. It also became clear that different groups, such as tenants and homeowners, have different capabilities of performing precaution, which should be considered in future risk communication. In 2005 and 2006, the rivers Elbe and Danube were again affected by flooding. A renewed survey among households and public authorities enabled us to investigate the improvement of flood risk management and precaution in the City of Dresden. Several methods and findings of this thesis are applicable to water resources management issues and contribute to an improvement of flood risk analysis and management in Germany.
Intermolecular deactivation between an excited fluorophore and a quencher by electron transfer can be described in terms of dynamic and static quenching. It is proposed to divide the dynamic quenching process into a transport phase and an interaction phase. Results on the quenching of N-heteroarenes by naphthalene at high quencher concentrations are described with static quenching. In addition, charge-transfer (CT) systems are investigated. After an overview of static models of resonance energy transfer, a model derived from hit theory is presented and tested on examples. The experiments are computer-controlled.
The behaviour of an adhering cell is strongly influenced by the chemical, topographical and mechanical properties of the surface it attaches to. During recent years, it has been found experimentally that adhering cells actively sense the elastic properties of their environment by pulling on it through numerous sites of adhesion. The resulting build-up of force at sites of adhesion depends on the elastic properties of the environment and is converted into corresponding biochemical signals, which can trigger cellular programmes like growth, differentiation, apoptosis, and migration. In general, force is an important regulator of biological systems, for example in hearing and touch, in wound healing, and in rolling adhesion of leukocytes on vessel walls. In the habilitation thesis by Ulrich Schwarz, several theoretical projects are presented which address the role of forces and elasticity in cell adhesion. (1) A new method has been developed for calculating cellular forces exerted at sites of focal adhesion on micro-patterned elastic substrates. The main result is that cell-matrix contacts function as mechanosensors, converting internal force into protein aggregation. (2) A one-step master equation for the stochastic dynamics of adhesion clusters as a function of cluster size, rebinding rate and force has been solved both analytically and numerically. Moreover this model has been applied to the regulation of cell-matrix contacts, to dynamic force spectroscopy, and to rolling adhesion. (3) Using linear elasticity theory and the concept of force dipoles, a model has been introduced and solved which predicts the positioning and orientation of mechanically active cells in soft material, in good agreement with experimental observations for fibroblasts on elastic substrates and in collagen gels.
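The one-step master equation for adhesion clusters mentioned in (2) can also be sampled stochastically. The sketch below is a minimal Gillespie simulation of a Bell-type cluster model in the spirit of that description; the parameter values and units are illustrative assumptions of mine, not those of the thesis. A cluster of N bonds with i closed shares a total dimensionless force f, so each closed bond ruptures with rate exp(f/i), while each open bond rebinds with rate gamma:

```python
import math
import random

def cluster_lifetime(N, f, gamma, rng):
    """One Gillespie realization of a one-step master equation for an
    adhesion cluster (illustrative Bell-model-type dynamics).

    i closed bonds share the total force f equally, so each ruptures with
    rate exp(f / i); each of the N - i open bonds rebinds with rate gamma.
    The cluster dies when the last bond opens (i == 0).
    """
    i, t = N, 0.0
    while i > 0:
        r_off = i * math.exp(f / i)    # force-enhanced dissociation (Bell)
        r_on = (N - i) * gamma         # rebinding of open bonds
        total = r_off + r_on
        t += rng.expovariate(total)    # waiting time to the next event
        i += 1 if rng.random() < r_on / total else -1
    return t

rng = random.Random(1)
low  = sum(cluster_lifetime(10, f=1.0,  gamma=1.0, rng=rng) for _ in range(200)) / 200
high = sum(cluster_lifetime(10, f=20.0, gamma=1.0, rng=rng) for _ in range(200)) / 200
print(low > high)   # larger shared force -> shorter mean cluster lifetime
```

The qualitative outcome, a sharp drop of cluster lifetime with applied force and a stabilizing role of rebinding, is the behavior such one-step models are used to capture.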
Highly collimated, high-velocity streams of hot plasma – jets – are observed as a general phenomenon in a variety of astrophysical objects of very different size and energy output. Known jet sources are protostellar objects (T Tauri stars, embedded IR sources), galactic high-energy sources ("microquasars"), and active galactic nuclei (extragalactic radio sources and quasars). Within the last two decades, our knowledge of the processes involved in astrophysical jet formation has condensed into a kind of standard model. This is the scenario of a magnetohydrodynamically accelerated and collimated jet stream launched from the innermost part of an accretion disk close to the central object. Traditionally, the problem of jet formation is divided into two categories. One is the question of how to collimate and accelerate an uncollimated low-velocity disk wind into a jet. The second is the question of how to initiate that outflow from the disk, i.e. how to turn accretion of matter into ejection as a disk wind. My own work is mainly related to the first question, the collimation and acceleration process. Due to the complexity of both the physical processes believed to be responsible for jet launching and the spatial configuration of the physical components of the jet source, the enigma of jet formation is not yet completely understood. On the theoretical side, there has been substantial advancement during the last decade from purely stationary models to time-dependent simulations, driven by the vast increase in computer power. Observers, on the other hand, do not yet have the instruments at hand to spatially resolve the very origin of the jets. It can be expected that the next years will also yield substantial improvement on both tracks of astrophysical research. 
Three-dimensional magnetohydrodynamic simulations will improve our understanding of the jet-disk interrelation and the time-dependent character of jet formation, the generation of the magnetic field in the jet source, and the interaction of the jet with the ambient medium. Another step will be the combination of radiation transfer computations and magnetohydrodynamic simulations, providing a direct link to the observations. At the same time, a new generation of telescopes (VLT, NGST) in combination with new instrumental techniques (IR interferometry) will lead to a "quantum leap" in jet observation, as the resolution will then be sufficient to zoom into the innermost region of jet formation.
One of the classical ways to describe the dynamics of nonlinear systems is to analyze their Fourier spectra. For periodic and quasiperiodic processes, the Fourier spectrum consists purely of discrete delta functions. In contrast, the spectrum of a chaotic motion is marked by the presence of a continuous component. In this work, we describe a peculiar state, neither regular nor completely chaotic, with a so-called singular-continuous power spectrum. Our investigations concern various cases from very different fields in which singular-continuous (fractal) spectra are encountered. The examples include both physical processes which can be reduced to iterated discrete mappings or even symbolic sequences, and processes whose description is based on ordinary or partial differential equations.
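As a concrete illustration (my own minimal example, not taken from the thesis), the Thue-Morse symbolic sequence is a textbook case with a singular-continuous spectrum: its power is neither concentrated in a few delta peaks, as for a periodic signal, nor flat, as for white noise.

```python
import cmath

def thue_morse(n):
    """First 2**n terms of the Thue-Morse sequence, mapped to +/-1."""
    seq = [1]
    for _ in range(n):
        seq = seq + [-x for x in seq]   # standard doubling construction
    return seq

def power_spectrum(x):
    """Plain O(N^2) DFT power spectrum |X_k|^2 (dependency-free sketch)."""
    N = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / N)
                    for t in range(N))) ** 2 for k in range(N)]

x = thue_morse(8)          # 256 samples of a deterministic +/-1 signal
P = power_spectrum(x)
# No single Fourier component dominates (unlike a periodic signal), yet the
# power is far from evenly spread (unlike noise): the fractal in-between case.
print(max(P) / sum(P))
```

Replacing the Thue-Morse sequence with a periodic one would concentrate essentially all power in a few delta peaks, which makes the contrast with the singular-continuous case easy to see numerically.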
By using mouse outcross populations in combination with bioinformatic approaches, it was possible to identify and characterize novel genes regulating body weight, fat mass and β-cell function, which all contribute to the pathogenesis of obesity and T2D. In detail, the presented studies identified 1. Ifi202b/IFI16 as an adipogenic gene involved in adipocyte commitment, maintenance of white adipocyte identity, fat cell size and the inflammatory state of adipose tissue; 2. Pla2g4a/PLA2G4A as a gene linked to increased body weight and fat mass, with higher expression in adipose tissue of obese mice and pigs as well as of obese human subjects; 3. Ifgga2/IRGM as a novel regulator of lipophagy protecting from excess hepatic lipid accumulation; 4. Nidd/DBA as a diabetogenic locus containing Kti12, Osbpl9, Ttc39a and Calr4, with differential expression in pancreatic islets and/or genetic variants; 5. miR-31 as more highly expressed in adipose tissue of obese and diabetic mice and humans, targeting PPARγ and GLUT4 and thereby involved in adipogenesis and insulin signaling; 6. Gjb4 as a novel gene triggering the development of T2D by reducing insulin secretion, inducing apoptosis and inhibiting proliferation. The performed studies confirmed the complexity and strongly heritable character of obesity and T2D. A high number of genetic variations, each with a small effect, collectively influence the degree and severity of the disease. The use of mouse outcross populations is a valid tool for disease gene identification; however, to facilitate and accelerate the process of gene identification, the combination of mouse cross data with advanced sequencing resources and publicly available data sets is essential. The main goal for future studies should be the translation of these novel molecular discoveries into useful therapies. 
More recently, several classes of novel unimolecular combination therapeutics have emerged with superior efficacy compared to currently prescribed options and with the potential to reverse obesity and T2D (Finan et al., 2015). The glucagon-like peptide-1 (GLP-1)-estrogen conjugate, which targets estrogen into cells expressing GLP-1 receptors, was shown to improve energy, glucose and lipid metabolism as well as to reduce food reward (Finan et al., 2012; Schwenk et al., 2014; Vogel et al., 2016). Another possibility is the development of miRNA-based therapeutics to prevent obesity and T2D, such as miRNA mimetics, anti-miRNA oligonucleotides and exosomes loaded with miRNAs (Ji and Guo, 2019; Gottmann et al., 2020). As already described, genome-wide association studies for polygenic obesity and T2D traits in humans have also led to the identification of numerous gene variants with modest effect, most of them having an unknown function (Yazdi et al., 2015). These discoveries have resulted in novel animal models and have illuminated new biological pathways. Therefore, the integration of mouse-human genetic approaches and the utilization of their synergistic effects have the potential to lead to the identification of more genes responsible for common and Mendelian forms of obesity and T2D, as well as gene × gene and gene × environment interactions (Yazdi et al., 2015; Ingelsson and McCarthy, 2018). This combination may help to unravel the missing heritability of obesity and T2D, to identify novel drug targets, and to design more efficient and personalized obesity prevention and management programs.