Institut für Physik und Astronomie
A detailed characterization of antimicrobial peptides (AMPs) is in high demand, since resistance to traditional antibiotics is an emerging problem in medicine. AMPs are part of the innate immune system of every organism, and they are very efficient in the protection against bacteria, viruses, fungi, and even cancer cells. Their advantage is that their target is the cell membrane, in contrast to antibiotics, which disturb the metabolism of the respective cell type. This allows AMPs to act more strongly and faster. The lack of an efficient therapy for some cancer types and the development of resistance against existing antitumor agents make AMPs promising in cancer therapy, besides being an alternative to traditional antibiotics. The aim of this work was the physico-chemical characterization of two fragments of LL-37, a human antimicrobial peptide from the cathelicidin family. The fragments LL-32 and LL-20 exhibited opposite behavior in biological experiments concerning their activity against bacterial cells, human cells, and human cancer cells: LL-32 had an even higher activity than LL-37, while LL-20 had almost no effect. The interaction of the two fragments with model membranes was systematically studied in this work to understand their mode of action. Planar lipid films were mainly used as model systems, in combination with IR spectroscopy and X-ray scattering methods; circular dichroism spectroscopy in bulk systems complemented the results. In the first approach, the structure of the peptides was determined in aqueous solution and compared to their structure at the air/water interface. In bulk, both peptides adopt an unstructured conformation. Adsorbed and confined to the air/water interface, the peptides differ drastically in their surface activity as well as in their secondary structure: while LL-32 transforms into an α-helix lying flat at the water surface, LL-20 stays partly unstructured. 
This is in good agreement with the high antimicrobial activity of LL-32. In the second approach, experiments with lipid monolayers as biomimetic models for the cell membrane were performed. It could be shown that the peptides fluidize condensed monolayers of negatively charged DPPG, which can be related to the thinning of a bacterial cell membrane. An interaction of the peptides with zwitterionic PCs, as models for mammalian cells, was not clearly observed, even though LL-32 is haemolytic. In the third approach, the lipid monolayers were further adapted to the composition of human erythrocyte membranes by incorporating sphingomyelin (SM) into the PC monolayers. The physico-chemical properties of the lipid films were determined, and the influence of the peptides on them was studied. It could be shown that the interaction of the more active LL-32 is strongly enhanced for heterogeneous lipid films containing both gel and fluid phases, while the interaction of LL-20 with the monolayers was unaffected. The results indicate an interaction of LL-32 with the membrane in a detergent-like way. Additionally, the peptide interaction with cancer cells was modelled by incorporating some negatively charged lipids into the PC/SM monolayers, but the increased charge had no effect on the interaction of LL-32. It was concluded that the high anti-cancer activity of the peptide originates from the changed fluidity of the cell membrane rather than from the increased surface charge. Furthermore, similarities to the physico-chemical properties of melittin, an AMP from bee venom, were demonstrated.
Crowded field spectroscopy and the search for intermediate-mass black holes in globular clusters
(2013)
Globular clusters are dense and massive star clusters that are an integral part of any major galaxy. Careful studies of their stars (a single cluster may contain several million of them) have revealed that the ages of many globular clusters are comparable to the age of the Universe. These remarkable ages make them valuable probes for the exploration of structure formation in the early universe and of the assembly of our own galaxy, the Milky Way. A topic of current research is the question whether globular clusters harbour massive black holes in their centres. Such black holes would bridge the gap between stellar-mass black holes, which represent the final stage in the evolution of massive stars, and the supermassive ones that reside in the centres of galaxies. For this reason, they are referred to as intermediate-mass black holes. The most reliable method to detect and to weigh a black hole is to study the motion of stars inside its sphere of influence. The measurement of Doppler shifts via spectroscopy allows one to carry out such dynamical studies. However, spectroscopic observations in dense stellar fields such as Galactic globular clusters are challenging. As a consequence of diffraction in the atmosphere and the finite resolution of a telescope, observed stars have a finite width, characterized by the point spread function (PSF), and hence appear blended in crowded stellar fields. Classical spectroscopy does not preserve any spatial information, so it is impossible to separate the spectra of blended stars and to measure their velocities. Yet methods have been developed to perform imaging spectroscopy; one of them is integral field spectroscopy. In the course of this work, the first systematic study on the potential of integral field spectroscopy in the analysis of dense stellar fields is carried out. 
To this aim, a method is developed to reconstruct the PSF from the observed data and to use this information to extract the stellar spectra. Based on dedicated simulations, predictions are made about the number of stellar spectra that can be extracted from a given data set and about the quality of those spectra. Furthermore, the influence of uncertainties in the recovered PSF on the extracted spectra is quantified. The results clearly show that, compared to traditional approaches, this method makes a significantly larger number of stars accessible to spectroscopic analysis. This systematic study goes hand in hand with the development of a software package that automates the individual steps of the data analysis. It is applied to data of three Galactic globular clusters, M3, M13, and M92, observed with the PMAS integral field spectrograph at the Calar Alto observatory with the aim of constraining the presence of intermediate-mass black holes in the centres of the clusters. The application of the new analysis method yields samples of about 80 stars per cluster; these are by far the largest spectroscopic samples obtained so far in the centre of any of the three clusters. In the further analysis, Jeans models are calculated for each cluster that predict the velocity dispersion based on an assumed mass distribution inside the cluster. The comparison with the observed stellar velocities shows that in none of the three clusters is a massive black hole required to explain the observed kinematics. Instead, the observations rule out any black hole in M13 with a mass higher than 13000 solar masses at the 99.7% level. For the other two clusters, this limit lies at significantly lower masses, namely 2500 solar masses in M3 and 2000 solar masses in M92. In M92, it is possible to lower this limit even further by a combined analysis of the extracted stars and the unresolved stellar component. 
This component consists of the numerous stars in the cluster that appear unresolved in the integral field data. The final limit of 1300 solar masses is the lowest limit obtained so far for a massive globular cluster.
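The PSF-based extraction of blended stellar spectra can be illustrated with a minimal sketch (not the actual pipeline of the thesis): if the star positions and the PSF shape are known, the flux of each star follows from a linear least-squares fit of one PSF profile per star to the observed pixels. Here a circular Gaussian stands in for the reconstructed PSF, and all names and numbers are illustrative.

```python
import numpy as np

def extract_fluxes(image, positions, sigma):
    """Deblend stars in a 2D image by linear least squares,
    assuming a known circular Gaussian PSF (a stand-in for a
    PSF reconstructed from the data)."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Design matrix: one PSF profile per star, one column each.
    A = np.column_stack([
        np.exp(-((xx - x)**2 + (yy - y)**2) / (2 * sigma**2)).ravel()
        for x, y in positions
    ])
    A /= A.sum(axis=0)  # normalize each PSF column to unit total flux
    fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return fluxes

# Two heavily blended stars, separated by less than 2 sigma:
ny = nx = 21
yy, xx = np.mgrid[0:ny, 0:nx]
sigma = 2.0

def psf(x0, y0, flux):
    g = np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))
    return flux * g / g.sum()

image = psf(9, 10, 100.0) + psf(12, 10, 40.0)
print(extract_fluxes(image, [(9, 10), (12, 10)], sigma))
```

Even though the two profiles overlap strongly, the linear fit recovers both fluxes, which is the essence of why PSF knowledge makes many more stars in a crowded field spectroscopically accessible.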
Multi-messenger constraints and pressure from dark matter annihilation into electron-positron pairs
(2013)
Despite striking evidence for the existence of dark matter from astrophysical observations, dark matter has escaped any direct or indirect detection until today. A proof of its existence and the revelation of its nature therefore remain among the most intriguing challenges of present-day cosmology and particle physics. The present work investigates the nature of dark matter through indirect signatures of dark matter annihilation into electron-positron pairs in two different ways: via the pressure generated by dark matter annihilation, and via multi-messenger constraints on the dark matter annihilation cross-section. We focus on dark matter annihilation into electron-positron pairs and adopt a model-independent approach, in which all electrons and positrons are injected with the same initial energy E_0 ~ m_dm*c^2. The propagation of these particles is determined by solving the diffusion-loss equation, considering inverse Compton scattering, synchrotron radiation, Coulomb collisions, bremsstrahlung, and ionization. The first part of this work, focusing on pressure from dark matter annihilation, demonstrates that dark matter annihilation into electron-positron pairs may affect the observed rotation curve by a significant amount. The injection rate in this calculation is constrained by INTEGRAL, Fermi, and H.E.S.S. data. The pressure of the relativistic electron-positron gas is computed from the energy spectrum predicted by the diffusion-loss equation. For values of the gas density and magnetic field that are representative of the Milky Way, it is estimated that the pressure gradients are strong enough to balance gravity in the central parts if E_0 < 1 GeV. The exact value depends somewhat on the astrophysical parameters, and it changes dramatically with the slope of the dark matter density profile. For very steep slopes, as expected from adiabatic contraction, the rotation curves of spiral galaxies would be affected on kiloparsec scales for most values of E_0. 
By comparing the predicted rotation curves with observations of dwarf and low surface brightness galaxies, we show that the pressure from dark matter annihilation may improve the agreement between theory and observations in some cases, but it also imposes severe constraints on the model parameters (most notably the inner slope of the halo density profile, as well as the mass and the annihilation cross-section of dark matter particles into electron-positron pairs). In the second part, upper limits on the dark matter annihilation cross-section into electron-positron pairs are obtained by combining observed data at different wavelengths (from the Haslam, WMAP, and Fermi all-sky intensity maps) with recent measurements of the electron and positron spectra in the solar neighbourhood by PAMELA, Fermi, and H.E.S.S. We consider synchrotron emission in the radio and microwave bands, as well as inverse Compton scattering and final-state radiation at gamma-ray energies. For most values of the model parameters, the tightest constraints are imposed by the local positron spectrum and by synchrotron emission from the central regions of the Galaxy. According to our results, the annihilation cross-section should not be higher than the canonical value for a thermal relic if the mass of the dark matter candidate is smaller than a few GeV. In addition, we derive a stringent upper limit on the inner logarithmic slope α of the density profile of the Milky Way dark matter halo (α < 1 if m_dm < 5 GeV, α < 1.3 if m_dm < 100 GeV, and α < 1.5 if m_dm < 2 TeV), assuming a dark matter annihilation cross-section into electron-positron pairs of (σv) = 3*10^−26 cm^3 s^−1, as predicted for thermal relics from the Big Bang.
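A model-independent treatment of this kind typically rests on the diffusion-loss equation for the electron/positron number density; a common textbook form, consistent with the processes listed above (the thesis's exact notation may differ), is

```latex
\frac{\partial n(E,\mathbf{r},t)}{\partial t}
  = \nabla \cdot \left[ D(E)\, \nabla n(E,\mathbf{r},t) \right]
  + \frac{\partial}{\partial E}\left[ b(E,\mathbf{r})\, n(E,\mathbf{r},t) \right]
  + Q(E,\mathbf{r}),
```

where $D(E)$ is the spatial diffusion coefficient, $b(E,\mathbf{r}) = -\mathrm{d}E/\mathrm{d}t$ is the total energy-loss rate (inverse Compton, synchrotron, Coulomb, bremsstrahlung, ionization), and $Q$ is the source term. For monoenergetic injection from annihilation, $Q \propto \langle\sigma v\rangle\, n_{\rm dm}^2(\mathbf{r})\, \delta(E - E_0)$, which is the assumption $E_0 \sim m_{\rm dm} c^2$ made above.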
Under suitable growth conditions, algal cultures often exhibit a higher cell productivity than is observed in higher plants. Chlamydomonas reinhardtii cells are comparatively small: during the vegetative cell cycle, the cell volume is about 50–3500 µm³. Compared to higher plants, however, the biomass concentration in an algal suspension is low; 1 ml of a typical culture contains between 10^6 and 10^7 algal cells. Quantifications of metabolites or macromolecules used for the modelling of cellular processes are usually performed on cell ensembles. In reality, however, each algal cell undergoes an individual development, which complicates the identification of characteristic, generally valid system parameters. The aim of this work was to identify and quantify biochemically relevant quantities in vivo and in vitro using optical methods. In the first part of the work, a pulse-amplitude-modulation (PAM) fluorimetry setup for measuring the variable chlorophyll fluorescence of single cells in response to external stimuli was presented. The use of a commercial microscope, the implementation of sensitive detection electronics, and a suitable immobilization method made it possible to achieve a signal-to-noise ratio at which fluorescence signals of individual living Chlamydomonas cells could be measured. In particular, the cell volume and the chlorophyll fluorescence parameter Fv/Fm, which serves as a measure of the efficiency of the photosynthetic apparatus and of cell fitness, were determined, and a high degree of heterogeneity of these cellular parameters was found at different developmental stages of the synchronized Chlamydomonas cells. 
In the second part of the work, laser scanning microscopy and subsequent image analysis were applied for the quantitative determination of growth-dependent cellular parameters. A commercial confocal microscope was extended with nonlinear microscopy capabilities. Nonlinear microscopy has the advantage of localized excitation, and thus of higher spatial resolution and lower overall stress on the sample. In addition to signal generation by fluorescence excitation, it allows second-harmonic generation (SHG) at biophotonic structures such as cellular starch. Based on the measured distribution functions, model-theoretic approaches made it possible to determine cellular parameters that are not directly accessible to measurement. The morphological information in the image data allowed the determination of cell volumes and of the volumes of subcellular structures such as nuclei, extranuclear DNA, or starch granules. Furthermore, the number of subcellular structures within a cell or cell cluster could be determined. The analysis of the signal intensities contained in the image data formed the basis of a relative concentration determination of cellular components such as DNA and starch. With the method of nonlinear microscopy and subsequent image analysis presented here, the distribution of the cellular starch content in a Chlamydomonas population could be followed for the first time during growth and after induced starch degradation. The method was subsequently also applied to cryosections of higher plants such as Arabidopsis thaliana. As a result, it was shown that many cellular parameters, such as the volume, the cellular DNA and starch content, and the number of starch granules, are described by a lognormal distribution with growth-dependent parametrization. 
Cellular parameters such as substance concentration and cell volume show no significant correlations with each other, from which it must be concluded that there is a high degree of heterogeneity of the cellular parameters within the synchronized Chlamydomonas populations. This holds both for synchronized cultures of Chlamydomonas reinhardtii, which are considered the most homogeneous form, and for the cellular parameters measured in the intact cell assemblies of higher plants. This result is particularly relevant for model-theoretic considerations that rely on empirical data or cellular parameters measured in cell ensembles, which do not necessarily represent the cellular status of an individual cell.
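The lognormal description of cellular parameters can be sketched as follows (a minimal illustration, not the analysis pipeline of the thesis): for lognormally distributed data, the maximum-likelihood parameters µ and σ are simply the mean and standard deviation of the log-transformed values. The numbers below are arbitrary illustrative values, not measured cell data.

```python
import numpy as np

def fit_lognormal(samples):
    """Maximum-likelihood fit of a lognormal distribution:
    mu and sigma are the mean and std of log(samples)."""
    logs = np.log(samples)
    return logs.mean(), logs.std()

rng = np.random.default_rng(0)
# Synthetic "cell volumes" drawn with mu=4.0, sigma=0.5 (illustrative only)
volumes = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)
mu, sigma = fit_lognormal(volumes)
print(mu, sigma)  # close to 4.0 and 0.5
```

A growth-dependent parametrization, as found in the thesis, would correspond to µ and σ varying with the developmental stage of the population.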
Famously, Einstein read off the geometry of spacetime from Maxwell's equations. Today, we take this geometry so seriously that our fundamental theory of matter, the standard model of particle physics, is based on it. However, there seems to be a gap in our understanding when it comes to physics outside the solar system: independent surveys show that we need concepts like dark matter and dark energy to make our models fit the observations, but these concepts do not fit into the standard model of particle physics. To overcome this problem, we have to be open, at the very least, to matter fields with kinematics and dynamics beyond the standard model. Such matter fields might then very well correspond to different spacetime geometries. This is the basis of this thesis: it studies the underlying spacetime geometries and ventures into the quantization of those matter fields independently of any background geometry. In the first part of this thesis, conditions are identified that a general tensorial geometry must fulfil to serve as a viable spacetime structure. Kinematics of massless and massive point particles on such geometries are introduced and their physical implications are investigated. Additionally, field equations for massive matter fields are constructed, for example a modified Dirac equation. In the second part, a background-independent formulation of quantum field theory, the general boundary formulation, is reviewed. The general boundary formulation is then applied to the Unruh effect as a testing ground, and first attempts are made to quantize massive matter fields on tensorial spacetimes.
The Sun is surrounded by a 10^6 K hot atmosphere, the corona. The corona and the solar wind are fully ionized and therefore in the plasma state. Magnetic fields play an important role in a plasma, since they bind electrically charged particles to their field lines. EUV spectrographs, like the SUMER instrument on board the SOHO spacecraft, reveal a preferential heating of coronal ions and strong temperature anisotropies. Velocity distributions of electrons can be measured directly in the solar wind, e.g. with the 3DPlasma instrument on board the WIND satellite. They show a thermal core, an anisotropic suprathermal halo, and an anti-solar, magnetic-field-aligned beam or "strahl". For an understanding of the physical processes in the corona, an adequate description of the plasma is needed. Magnetohydrodynamics (MHD) treats the plasma simply as an electrically conductive fluid; multi-fluid models consider, e.g., protons and electrons as separate fluids. These models enable a description of many macroscopic plasma processes. However, fluid models are based on the assumption of a plasma near thermodynamic equilibrium, and the solar corona is far from it. Furthermore, fluid models cannot describe processes like the interaction with electromagnetic waves on a microscopic scale. Kinetic models, which are based on particle velocity distributions, do not suffer from these limitations and are therefore well suited for explaining the observations listed above. In the simplest kinetic models, the mirror force in the interplanetary magnetic field focuses solar wind electrons into an extremely narrow beam, which is contradicted by observations. Therefore, a scattering mechanism must exist that counteracts the mirror force. In this thesis, a kinetic model for electrons in the solar corona and wind is presented in which electrons are scattered by resonant interaction with whistler waves. The kinetic model reproduces the observed components of solar wind electron distributions, i.e. 
core, halo, and a "strahl" of finite width. The model is not only applicable to the quiet Sun: the propagation of energetic electrons from a solar flare is also studied, and it is found that scattering in the direction of propagation and energy diffusion influence the arrival times of flare electrons at Earth to approximately the same degree. In the corona, the interaction of electrons with whistler waves leads not only to scattering but also to the formation of a suprathermal halo, as observed in interplanetary space. This effect is studied both for the solar wind and for the closed volume of a coronal magnetic loop. The result is of fundamental importance for solar-stellar relations: the quiet solar corona always produces suprathermal electrons. This process is closely related to coronal heating, and can therefore be expected in any hot stellar corona. The second part of this thesis details how growth and damping rates of plasma waves are calculated from electron velocity distributions. The emission and propagation of electron cyclotron waves in the quiet solar corona, and of whistler waves during solar flares, is studied. The latter can be observed as so-called fiber bursts in dynamic radio spectra, and the results are in good agreement with observed bursts.
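The resonant wave-particle interaction underlying this scattering is commonly expressed by the cyclotron resonance condition; in standard notation (a textbook form, not necessarily the thesis's),

```latex
\omega - k_{\parallel} v_{\parallel} = \frac{n\,\Omega_e}{\gamma},
\qquad n = 0, \pm 1, \pm 2, \dots
```

where $\omega$ and $k_{\parallel}$ are the wave frequency and field-aligned wavenumber, $v_{\parallel}$ is the electron velocity along the magnetic field, $\Omega_e$ is the electron gyrofrequency, and $\gamma$ is the Lorentz factor. Electrons satisfying this condition for whistler waves exchange energy and pitch angle with the wave field, which is what counteracts the mirror-force focusing of the strahl.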
Nucleation and growth of unsubstituted metal phthalocyanine films from solution on planar substrates
(2012)
In recent years, cost-efficient wet-chemical coating processes for the preparation of organic thin films for various opto-electronic applications have been discovered and developed further. Among others, phthalocyanine molecules have been intensively investigated for photoactive layers in solar cells. Because of their low or unknown solubility, phthalocyanine films have usually been prepared by vacuum deposition. Alternatively, the solubility has been increased by chemical synthesis, which, however, degrades the properties of the phthalocyanine (Pc). In this work, the solubility, optical absorption, and stability of 8 different unsubstituted metal phthalocyanines in 28 different solvents were measured quantitatively. Owing to its sufficient solubility, stability, and applicability in organic solar cells, copper phthalocyanine (CuPc) in trifluoroacetic acid (TFA) was selected for further investigations. By spin coating CuPc from TFA solution, a thin film was deposited from the evaporating solution onto the substrate. After evaporation of the solvent, nanoribbons of CuPc cover the substrate. The nanoribbons have a thickness of about 1 nm (the typical dimension of a CuPc molecule) and varying width and length, depending on the amount of material. Such nanoribbons can be produced by spin coating as well as by other wet-coating methods, such as dip coating. Similar fibrillar structures form by wet coating of other metal phthalocyanines, such as iron and magnesium phthalocyanine, from TFA solution, and on other substrates, such as glass or indium tin oxide. The material properties of CuPc deposited from TFA solution and of CuPc in solution were investigated in detail by X-ray diffraction as well as spectroscopic and microscopic methods. 
It is shown that the nanoribbons do not form in solution but rather through the evaporation of the solvent and the resulting supersaturation of the solution. Atomic force microscopy was used to study the morphology of the dried film at different concentrations. The mechanism of nanoribbon formation was studied in detail: the formation of the CuPc nanoribbons from a supersaturated solution was discussed in terms of nucleation and growth theory, and the shape of the nanoribbons was discussed by taking into account the interactions between the molecules and the substrate. The wet-processed CuPc thin film was used as the donor layer in organic bilayer solar cells with the C60 molecule as acceptor. The power conversion efficiency of such a cell was investigated as a function of the thickness of the CuPc layer.
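The nucleation-and-growth picture invoked above can be made concrete with the textbook free-energy balance of classical nucleation theory (a standard form, not a formula taken from the thesis): for a spherical nucleus of radius $r$ forming from a supersaturated solution,

```latex
\Delta G(r) = -\frac{4\pi r^{3}}{3\,v_m}\, k_B T \ln S \;+\; 4\pi r^{2}\gamma,
\qquad
r^{*} = \frac{2\gamma\, v_m}{k_B T \ln S},
```

where $S$ is the supersaturation ratio, $v_m$ the molecular volume, and $\gamma$ the surface free energy of the nucleus. Nuclei smaller than the critical radius $r^{*}$ redissolve, while larger ones grow, which is why ribbon formation sets in only once solvent evaporation drives $S$ above unity.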
This thesis is based on two large surveys of Active Galactic Nuclei (AGNs). The first survey deals with galaxies that host low-level AGNs (LLAGNs) and aims at identifying such galaxies by quantifying their variability. While numerous studies have shown that AGNs can be variable at all wavelengths, the nature of this variability is still not well understood. Studying the properties of LLAGNs may help to better understand galaxy evolution and how AGNs transition between active and inactive states. In this thesis, we develop a method to extract the variability properties of AGNs. Using multi-epoch deep photometric observations, we subtract the contribution of the host galaxy at each epoch to extract the variability and estimate AGN accretion rates. This pipeline will be a powerful tool in connection with future deep surveys such as Pan-STARRS. The second study in this thesis describes a survey of X-ray-selected AGN hosts at redshifts z > 1.5 and compares them to quiescent galaxies. This survey aims at studying the environments, sizes, and morphologies of star-forming high-redshift AGN hosts in the COSMOS Survey at the epoch of peak AGN activity. Between redshifts 1.5 < z < 3.8, the COSMOS HST/ACS imaging probes the UV regime, where separating the AGN flux from that of its host galaxy is very challenging. Nevertheless, we successfully derived the structural properties of 249 AGN hosts using two-dimensional surface-brightness profile fitting with the GALFIT package. This is the largest sample of AGN hosts at redshift z > 1.5 to date. We analyzed the evolution of the structural parameters of AGN and non-AGN host galaxies with redshift, and compared their disturbance rates to identify the most probable AGN triggering mechanism in the 43.5 < log_10 L_X < 45 luminosity range. We also conducted mock observations of AGN and quiescent galaxies to determine errors and corrections for the derived parameters. 
We find that the size-absolute magnitude relations of AGN hosts and non-AGN galaxies are very similar, with estimated mean sizes in both samples decreasing by ~50% between redshifts z = 1.5 and z = 3.5. Morphological classification of both active and quiescent galaxies shows that the majority of the AGN host galaxies are disc-dominated, with disturbance rates that are significantly lower than among the non-AGN galaxies. This finding suggests that major mergers are probably not responsible for triggering AGN accretion in most of these galaxies; other, secular mechanisms are therefore the more likely cause.
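Surface-brightness fitting with GALFIT relies on parametric models such as the Sérsic profile; evaluating such a profile can be sketched as follows (using the common analytic approximation for the shape constant b_n; this is an illustration, not the thesis's fitting code):

```python
import numpy as np

def sersic(r, I_e, r_e, n):
    """Sersic surface-brightness profile I(r).
    I_e is the intensity at the effective radius r_e; n is the
    Sersic index. b_n uses the common approximation b_n ~ 2n - 1/3."""
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e)**(1.0 / n) - 1.0))

r = np.linspace(0.1, 10.0, 5)
# n = 1 gives an exponential disc, n = 4 a de Vaucouleurs-like bulge
disc = sersic(r, 1.0, 3.0, 1.0)
bulge = sersic(r, 1.0, 3.0, 4.0)
print(disc, bulge)
```

A "disc-dominated" classification, as used above, corresponds to best-fit Sérsic indices near n = 1, while bulge-dominated systems fit better with larger n.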
In this work, the effects of synchronization of nonlinear acoustic oscillators are investigated using the example of two organ pipes. From existing experimental data, the typical signatures of synchronization are extracted and presented. A detailed analysis follows of the transition regions into the synchronization plateau, of the phenomena during synchronization, and of the exit from the synchronization region of the two organ pipes, at different coupling strengths. The experimental findings raise questions about the coupling function. To address them, the sound generation in an organ pipe is investigated. With the help of numerical simulations of the sound generation, the question is pursued of which fluid-dynamical and aero-acoustic mechanisms underlie the sound generation in the organ pipe, and to what extent these mechanisms can be mapped onto the model of a self-sustained acoustic oscillator. Using a coarse-graining method, a model ansatz is formulated.
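The phase dynamics of two weakly coupled self-sustained oscillators, such as the organ pipes studied here, is often modelled by the Adler equation for the phase difference. A minimal numerical sketch (with illustrative parameters, not values fitted to the experiment) shows the phase locking inside the synchronization region:

```python
import numpy as np

def adler(delta_omega, eps, phi0=0.0, dt=1e-3, steps=200_000):
    """Integrate the Adler equation d(phi)/dt = delta_omega - eps*sin(phi)
    for the phase difference phi of two coupled oscillators (Euler scheme).
    delta_omega: frequency detuning; eps: coupling strength."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (delta_omega - eps * np.sin(phi))
    return phi

# Inside the synchronization region (|delta_omega| < eps), the phase
# difference locks at the stable fixed point sin(phi*) = delta_omega / eps.
phi_locked = adler(delta_omega=0.5, eps=1.0)
print(phi_locked, np.arcsin(0.5))  # both close to 0.5236
```

For |delta_omega| > eps, no fixed point exists and the phase difference drifts, which corresponds to leaving the synchronization plateau observed in the measurements.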
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for estimating the self-similarity exponent as well as for identifying long-range dependence (or long memory). In this thesis, I present a Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach makes it possible to calculate the point estimator and confidence intervals at the same time, which brings significant advantages in data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data, thus broadening the range of systems where the estimation of the Hurst exponent is possible. Since Gaussian self-similar processes form one of the substantial classes of great interest in modeling, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Additionally, applications to real-world data, such as water-level data of the Nile River and fixational eye movements, are also discussed.
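For contrast with the Bayesian approach, one of the classical point estimators of the Hurst exponent can be sketched in a few lines: the aggregated-variance method, which exploits the scaling Var(block means of size m) ~ m^(2H−2) of fractional Gaussian noise. This is a standard frequentist sketch, not the thesis's Bayesian estimator:

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(4, 8, 16, 32, 64)):
    """Aggregated-variance estimate of the Hurst exponent H:
    for fractional Gaussian noise, Var(block means of size m) ~ m^(2H - 2),
    so a log-log fit of variance against m has slope 2H - 2."""
    variances = []
    for m in block_sizes:
        n_blocks = len(x) // m
        means = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        variances.append(means.var())
    slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0

rng = np.random.default_rng(1)
# White noise is fractional Gaussian noise with H = 0.5
H = hurst_aggvar(rng.standard_normal(100_000))
print(H)  # close to 0.5
```

Note that this estimator yields only a point value and needs long, evenly sampled series; the Bayesian method developed in the thesis addresses exactly these limitations by providing credible intervals and handling short or unevenly sampled data.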
Actin is one of the most abundant and highly conserved proteins in eukaryotic cells. The globular protein assembles into long filaments, which form a variety of different networks within the cytoskeleton. The dynamic reorganization of these networks, which is pivotal for cell motility, cell adhesion, and cell division, is based on cycles of polymerization (assembly) and depolymerization (disassembly) of actin filaments. Actin binds ATP, and within the filament, actin-bound ATP is hydrolyzed into ADP on a time scale of a few minutes. As ADP-actin dissociates faster from the filament ends than ATP-actin, the filament becomes less stable as it grows older. However, recent single-filament experiments, in which abrupt dynamical changes during filament depolymerization were observed, suggest the opposite behavior, namely that actin filaments become increasingly stable with time. Several mechanisms for this stabilization have been proposed, ranging from structural transitions of the whole filament to surface attachment of the filament ends. The key issue of this thesis is to elucidate these unexpected interruptions of depolymerization by a combination of experimental and theoretical studies. In new depolymerization experiments on single filaments, we confirm that filaments cease to shrink in an abrupt manner, and we determine the time from the initiation of depolymerization until the occurrence of the first interruption. This duration differs from filament to filament and represents a stochastic variable. We consider various hypothetical mechanisms that may cause the observed interruptions. These mechanisms cannot be distinguished directly, but they give rise to distinct distributions of the time until the first interruption, which we compute by modeling the underlying stochastic processes. 
A comparison with the measured distribution reveals that the sudden truncation of the shrinkage process neither arises from blocking of the ends nor from a collective transition of the whole filament. Instead, we predict a local transition process occurring at random sites within the filament. The combination of additional experimental findings and our theoretical approach confirms the notion of a local transition mechanism and identifies the transition as the photo-induced formation of an actin dimer within the filaments. Unlabeled actin filaments do not exhibit pauses, which implies that, in vivo, older filaments become destabilized by ATP hydrolysis. This destabilization can be identified with an acceleration of the depolymerization prior to the interruption. In the final part of this thesis, we theoretically analyze this acceleration to infer the mechanism of ATP hydrolysis. We show that the rate of ATP hydrolysis is constant within the filament, corresponding to a random as opposed to a vectorial hydrolysis mechanism.
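The stochastic modelling of first-interruption times can be sketched for the local-transition mechanism favored above (with illustrative rates, not the fitted values of the thesis): the filament shrinks at a constant speed, each subunit independently undergoes a transition at a constant rate, and the first interruption occurs when the shrinking end reaches a subunit that has already transformed.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_interruption(v=30.0, k=0.01, n=10_000):
    """Monte Carlo draw of the time until depolymerization first hits a
    locally transformed subunit. Subunit i (counted from the shrinking end)
    is reached at time i / v (shrinkage speed v in subunits per second);
    each subunit transforms at an exponentially distributed time with
    rate k per second. Rates are illustrative, not fitted values."""
    t_transform = rng.exponential(1.0 / k, size=n)  # per-subunit transition times
    t_reached = np.arange(1, n + 1) / v             # deterministic shrinkage
    hit = np.nonzero(t_transform < t_reached)[0]
    return t_reached[hit[0]] if hit.size else np.inf

# Sample the first-interruption time distribution
times = np.array([first_interruption() for _ in range(500)])
print(times.mean())
```

Repeating such simulations for each hypothetical mechanism (end blocking, collective transition, local transition) yields the distinct first-interruption-time distributions that the thesis compares against the measured one.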
This work investigates diffusion in nonlinear Hamiltonian systems. The diffusion, more precisely subdiffusion, in such systems is induced by the intrinsic chaotic behavior of trajectories and is thus called "chaotic diffusion". Its properties are studied on the example of one- or two-dimensional lattices of harmonic or nonlinear oscillators with nearest-neighbor couplings. The fundamental observation is the spreading of energy for localized initial conditions. Methods of quantifying this spreading behavior are presented, including a new quantity called the excitation time. This new quantity allows for a more precise analysis of the spreading than traditional methods. Furthermore, the nonlinear diffusion equation (NDE) is introduced as a phenomenological description of the spreading process, and a number of predictions on the density dependence of the spreading are drawn from this equation. Two mathematical techniques for analyzing nonlinear Hamiltonian systems are introduced. The first one is based on a scaling analysis of the Hamiltonian equations, and the results are related to similar scaling properties of the NDE. From this relation, exact spreading predictions are deduced. Secondly, the microscopic dynamics at the edge of spreading states are thoroughly analyzed, which again suggests a scaling behavior that can be related to the NDE. Such a microscopic treatment of chaotically spreading states in nonlinear Hamiltonian systems has not been done before, and the results present a new technique of connecting microscopic dynamics with macroscopic descriptions like the nonlinear diffusion equation. All theoretical results are supported by extensive numerical simulations, partly obtained on one of Europe's fastest supercomputers located in Bologna, Italy. In the end, the highly interesting case of harmonic oscillators with random frequencies and nonlinear coupling is studied, which resembles to some extent the famous Discrete Anderson Nonlinear Schroedinger Equation.
For this model, a deviation from the widely believed power-law spreading is observed in numerical experiments. Some ideas on a theoretical explanation for this deviation are presented, but a conclusive theory could not be found due to the complicated phase space structure in this case. Nevertheless, it is hoped that the techniques and results presented in this work will help to eventually understand this controversially discussed case as well.
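The hallmark of the NDE, subdiffusive self-similar spreading, can be checked in a few lines. The sketch below (with illustrative parameters, not those of the thesis) integrates du/dt = d/dx(u^a du/dx) for a localized initial state and fits the growth exponent of the second moment, which for this equation should approach 2/(a+2), i.e. 1/2 for a = 2.

```python
import numpy as np

a = 2.0                                  # nonlinearity exponent of the NDE
nx, dx, dt = 401, 1.0, 0.1
x = (np.arange(nx) - nx // 2) * dx
u = np.zeros(nx)
u[nx // 2 - 5 : nx // 2 + 5] = 1.0       # localized initial excitation

times, m2 = [], []
t = 0.0
for step in range(100000):
    um = 0.5 * (u[1:] + u[:-1])          # density at cell interfaces
    flux = um**a * (u[1:] - u[:-1]) / dx  # u^a * du/dx at interfaces
    u[1:-1] += dt / dx * (flux[1:] - flux[:-1])
    np.maximum(u, 0.0, out=u)            # guard against overshoot at the front
    t += dt
    if step % 10000 == 0:
        times.append(t)
        m2.append(np.sum(u * x**2) / np.sum(u))

# second moment should grow as t**(2/(a+2)) = t**0.5 for a = 2
slope = np.polyfit(np.log(times[2:]), np.log(m2[2:]), 1)[0]
```

The explicit scheme is stable here because dt * max(u)^a / dx^2 stays well below 1/2; the late-time fit discards the initial transient.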
We investigate properties of quantum mechanical systems in the light of quantum information theory. We put an emphasis on systems with infinite-dimensional Hilbert spaces, so-called "continuous-variable systems", which are needed to describe quantum optics beyond the single-photon regime and other bosonic quantum systems. We present methods to obtain a description of such systems from a series of measurements in an efficient manner and demonstrate the performance in realistic situations by means of numerical simulations. We consider both unconditional quantum state tomography, which is applicable to arbitrary systems, and tomography of matrix product states. The latter allows for the tomography of many-body systems because the necessary number of measurements scales merely polynomially with the particle number, compared to an exponential scaling in the generic case. We also present a method to realize such a tomography scheme for a system of ultra-cold atoms in optical lattices. Furthermore, we discuss in detail the possibilities and limitations of using continuous-variable systems for measurement-based quantum computing. We will see that the distinction between Gaussian and non-Gaussian quantum states and measurements plays a crucial role. We also provide an algorithm to efficiently solve a large and interesting class of naturally occurring Hamiltonians, namely frustration-free ones, and use this insight to obtain a simple approximation method for slightly frustrated systems. To achieve these goals, we make use of, among various other techniques, the well-developed theory of matrix product states, tensor networks, semi-definite programming, and matrix analysis.
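The polynomial-versus-exponential scaling behind matrix product state tomography comes down to parameter counting: a generic pure state of N qudits with local dimension d needs d^N complex amplitudes, while an MPS with bond dimension D needs only about N·d·D^2 parameters. A tiny illustration (d and D are arbitrary example values):

```python
# Parameter counting: generic pure state of N qudits vs. a matrix product
# state (MPS) with bond dimension D.  d and D are illustrative choices.
d, D = 2, 8

def generic_params(N):
    # full state vector: one complex amplitude per basis state
    return d**N

def mps_params(N):
    # one d x D x D tensor per site (boundary tensors counted the same way)
    return N * d * D**2

for N in (10, 20, 40):
    print(N, generic_params(N), mps_params(N))
```

Already at N = 40 the generic description requires about 10^12 amplitudes, while the MPS ansatz stays in the thousands, which is why the necessary number of measurements can scale polynomially.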
The microscopic origin of ultrafast demagnetization, i.e. the quenching of the magnetization of a ferromagnetic metal on a sub-picosecond timescale after laser excitation, is still only incompletely understood, despite a large body of experimental and theoretical work performed since the discovery of the effect more than 15 years ago. Time- and element-resolved x-ray magnetic circular dichroism measurements can provide insight into the microscopic processes behind ultrafast demagnetization as well as its dependence on materials properties. Using the BESSY II Femtoslicing facility, a storage ring based source of 100 fs short soft x-ray pulses, ultrafast magnetization dynamics of ferromagnetic NiFe and GdTb alloys as well as a Au/Ni layered structure were investigated in laser pump – x-ray probe experiments. After laser excitation, the constituents of Ni50Fe50 and Ni80Fe20 exhibit distinctly different time constants of demagnetization, leading to decoupled dynamics, despite the strong exchange interaction that couples the Ni and Fe sublattices under equilibrium conditions. Furthermore, the time constants of demagnetization for Ni and Fe are different in Ni50Fe50 and Ni80Fe20, and also different from the values for the respective pure elements. These variations are explained by taking the magnetic moments of the Ni and Fe sublattices, which are changed from the pure element values due to alloying, as well as the strength of the intersublattice exchange interaction into account. GdTb exhibits demagnetization in two steps, typical for rare earths. The time constant of the second, slower magnetization decay was previously linked to the strength of spin-lattice coupling in pure Gd and Tb, with the stronger, direct spin-lattice coupling in Tb leading to a faster demagnetization. In GdTb, the demagnetization of Gd follows Tb on all timescales. 
This is due to the opening of an additional channel for the dissipation of spin angular momentum to the lattice, since Gd magnetic moments in the alloy are coupled via indirect exchange interaction to neighboring Tb magnetic moments, which are in turn strongly coupled to the lattice. Time-resolved measurements of the ultrafast demagnetization of a Ni layer buried under a Au cap layer, thick enough to absorb nearly all of the incident pump laser light, showed a somewhat slower but still sub-picosecond demagnetization of the buried Ni layer in Au/Ni compared to a Ni reference sample. Supported by simulations, I conclude that demagnetization can thus be induced by transport of hot electrons excited in the Au layer into the Ni layer, without the need for direct interaction between photons and spins.
Structural dynamics of photoexcited nanolayered perovskites studied by ultrafast x-ray diffraction
(2012)
This publication-based thesis represents a contribution to the active research field of ultrafast structural dynamics in laser-excited nanostructures. The investigation of such dynamics is mandatory for the understanding of the various physical processes on microscopic scales in complex materials, which hold great potential for advances in many technological applications. I theoretically and experimentally examine the coherent, incoherent and anharmonic lattice dynamics of epitaxial metal-insulator heterostructures on timescales ranging from femtoseconds up to nanoseconds. To infer information on the transient dynamics in the photoexcited crystal lattices, experimental techniques using ultrashort optical and x-ray pulses are employed. The experimental setups include table-top sources as well as large-scale facilities such as synchrotron sources. At the core of my work lies the development of a linear-chain model to simulate and analyze the photoexcited atomic-scale dynamics. The calculated strain fields are then used to simulate the optical and x-ray response of the considered thin films and multilayers in order to relate the experimental signatures to particular structural processes. This way, one obtains insight into the rich lattice dynamics exhibiting coherent transport of vibrational energy from local excitations via delocalized phonon modes of the samples. The complex deformations in tailored multilayers are identified to give rise to highly nonlinear x-ray diffraction responses due to transient interference effects. The understanding of such effects and the ability to calculate them precisely are exploited for the design of novel ultrafast x-ray optics. In particular, I present several Phonon Bragg Switch concepts to efficiently generate ultrashort x-ray pulses for time-resolved structural investigations.
By extending the numerical models to include incoherent phonon propagation and anharmonic lattice potentials, I present a new view on the fundamental research topics of nanoscale thermal transport and anharmonic phonon-phonon interactions such as nonlinear sound propagation and phonon damping. The former issue is exemplified by the time-resolved heat conduction from thin SrRuO3 films into a SrTiO3 substrate, which exhibits an unexpectedly low thermal conductivity. Furthermore, I discuss various experiments which can be well reproduced by the versatile numerical models and thus evidence strong lattice anharmonicities in the perovskite oxide SrTiO3. The thesis also presents several advances of experimental techniques such as time-resolved phonon spectroscopy with optical and x-ray photons as well as concepts for the implementation of x-ray diffraction setups at standard synchrotron beamlines with largely improved time-resolution for investigations of ultrafast structural processes. This work forms the basis for ongoing research topics in complex oxide materials including electronic correlations and phase transitions related to the elastic, magnetic and polarization degrees of freedom.
In the western hemisphere, the piano is one of the most important instruments. Although its evolution has spanned more than three centuries and the most important physical aspects have already been investigated, some parts of the characterization of the piano remain not well understood. Regarding the pivotal piano soundboard, in particular the effect that ribs mounted on the board exert on sound radiation and propagation is mostly neglected in the literature. The present investigation deals exactly with the sound wave propagation effects that emerge in the presence of an array of equally spaced ribs mounted on a soundboard. Solid-state theory proposes particular eigenmodes and eigenfrequencies for such arrangements, which are comparable to single units in a crystal. Following this 'linear chain model' (LCM), differences in the frequency spectrum are observable as a distinct band structure. Also, the amplitudes of the modes are changed due to differences in the damping factor. These scattering effects were investigated not only for a well-understood conceptual rectangular soundboard (multichord), but also for a genuine piano resonance board manufactured by the piano maker 'C. Bechstein Pianofortefabrik'. To be able to distinguish between the characterizing spectra both with and without mounted ribs, the typical assembly plan for the Bechstein instrument was specially customized. Spectral similarities and differences between both boards are found in terms of damping and tone. Furthermore, specially prepared minimally invasive piezoelectric polymer sensors made from polyvinylidene fluoride (PVDF) were used to record solid-state vibrations of the investigated system. The essential calibration and characterization of these polymer sensors was performed by determining the electromechanical conversion, which is represented by the piezoelectric coefficient.
Therefore, the robust 'sinusoidally varying external force' method was applied, in which a dynamic force perpendicular to the sensor's surface generates movable charge carriers. Crucial parameters were monitored, with the frequency response function as the most important one for acousticians. Along with conventional condenser microphones, the sound was measured both as solid-state vibration and as airborne wave. On this basis, statements can be made about the emergence, propagation, and also the overall radiation of the generated modes of the vibrating system. Ultimately, these results acoustically characterize the entire system.
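The band structure predicted by the linear chain model can be illustrated with a toy computation. All masses, couplings, and the rib period below are invented, not fitted to any soundboard: a 1D chain of coupled oscillators in which every p-th mass is heavier (mimicking a rib) develops gaps in its eigenfrequency spectrum, just as a superlattice does.

```python
import numpy as np

# Toy 'linear chain model': N coupled masses with fixed ends, every p-th mass
# heavier to mimic a rib.  All parameter values are illustrative.
N, k = 200, 1.0                  # number of masses, spring constant
m_plain, m_rib, p = 1.0, 4.0, 5
m = np.full(N, m_plain)
m[::p] = m_rib                   # periodically placed heavy 'ribs'

# stiffness matrix of the chain with fixed ends
K = k * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))

# generalized eigenproblem K v = w^2 M v, symmetrized as M^-1/2 K M^-1/2
Minv_sqrt = np.diag(1.0 / np.sqrt(m))
w2 = np.linalg.eigvalsh(Minv_sqrt @ K @ Minv_sqrt)
freqs = np.sqrt(np.abs(w2))

# a band gap shows up as an unusually large jump between sorted frequencies
gaps = np.diff(np.sort(freqs))
```

With a uniform chain (`m_rib = m_plain`) the spacings vary smoothly; the periodic mass contrast opens gaps that appear in a measured spectrum as a distinct band structure.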
Theory of mRNA degradation
(2012)
One of the central themes of biology is to understand how individual cells achieve a high fidelity in gene expression. Each cell needs to ensure accurate protein levels for its proper functioning and its capability to proliferate. Therefore, complex regulatory mechanisms have evolved in order to render the expression of each gene dependent on the expression level of (all) other genes. Regulation can occur at different stages within the framework of the central dogma of molecular biology. One very effective and relatively direct mechanism concerns the regulation of the stability of mRNAs. All organisms have evolved diverse and powerful mechanisms to achieve this. In order to better comprehend the regulation in living cells, biochemists have studied specific degradation mechanisms in detail. In addition to that, modern high-throughput techniques make it possible to obtain quantitative data on a global scale by parallel analysis of the decay patterns of many different mRNAs from different genes. In previous studies, the interpretation of these mRNA decay experiments relied on a simple theoretical description based on an exponential decay. However, this does not account for the complexity of the responsible mechanisms and, as a consequence, the exponential decay is often not in agreement with the experimental decay patterns. We have developed an improved and more general theory of mRNA degradation which provides a general framework of mRNA expression and allows specific degradation mechanisms to be described. We have made an attempt to provide detailed models for the regulation in different organisms. In the yeast S. cerevisiae, different degradation pathways are known to compete and furthermore most of them rely on the biochemical modification of mRNA molecules. In bacteria such as E. coli, degradation proceeds primarily endonucleolytically, i.e. it is governed by the initial cleavage within the coding region.
In addition, it is often coupled to the level of maturity and the size of the polysome of an mRNA. Both for S. cerevisiae and E. coli, our descriptions lead to a considerable improvement in the interpretation of experimental data. The general outcome is that the degradation of mRNA must be described by an age-dependent degradation rate, which can be interpreted as a consequence of molecular aging of mRNAs. Within our theory, we find adequate ways to address this much debated topic from a theoretical perspective. The improvements in the understanding of mRNA degradation can be readily applied to further comprehend mRNA expression under different internal or environmental conditions such as after the induction of transcription or stress application. Also, the role of mRNA decay can be assessed in the context of translation and protein synthesis. The ultimate goal in understanding gene regulation mediated by mRNA stability will be to identify the relevance and biological function of different mechanisms. Once more quantitative data become available, our description makes it possible to elaborate the role of each mechanism by devising a suitable model.
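The contrast between a constant degradation rate and an age-dependent one can be made concrete in a few lines. In this sketch the linearly growing hazard w(t) = k·t is a hypothetical choice, not the fitted model of the thesis; the survival of a cohort is exp of minus the integrated rate, so a constant rate gives a pure exponential, while an age-increasing rate produces a delayed, shoulder-like decay that no single exponential can reproduce.

```python
import numpy as np

# Survival of an mRNA cohort: S(t) = exp(-integral_0^t w(t') dt').
t = np.linspace(0.0, 10.0, 500)
k = 0.5

surv_const = np.exp(-k * t)            # constant rate w(t) = k  -> exponential
surv_aging = np.exp(-0.5 * k * t**2)   # aging rate  w(t) = k*t -> Gaussian-type

# the aging cohort decays more slowly at first (shoulder), then faster
```

Fitting `surv_aging`-like data with a single exponential is exactly the kind of mismatch the age-dependent description resolves.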
Thermal and quantum fluctuations of the electromagnetic near field of atoms and macroscopic bodies play a key role in quantum electrodynamics (QED), as in the Lamb shift. They lead, e.g., to atomic level shifts, dispersion interactions (Van der Waals-Casimir-Polder interactions), and state broadening (Purcell effect) because the field is subject to boundary conditions. Such effects can be observed with high precision on the mesoscopic scale which can be accessed in micro-electro-mechanical systems (MEMS) and solid-state-based magnetic microtraps for cold atoms (‘atom chips’). A quantum field theory of atoms (molecules) and photons is adapted to nonequilibrium situations. Atoms and photons are described as fully quantized while macroscopic bodies can be included in terms of classical reflection amplitudes, similar to the scattering approach of cavity QED. The formalism is applied to the study of nonequilibrium two-body potentials. We then investigate the impact of the material properties of metals on the electromagnetic surface noise, with applications to atomic trapping in atom-chip setups and quantum computing, and on the magnetic dipole contribution to the Van der Waals-Casimir-Polder potential in and out of thermal equilibrium. In both cases, the particular properties of superconductors are of high interest. Surface-mode contributions, which dominate the near-field fluctuations, are discussed in the context of the (partial) dynamic atomic dressing after a rapid change of a system parameter and in the Casimir interaction between two conducting plates, where nonequilibrium configurations can give rise to repulsion.
In the course of this thesis gold nanoparticle/polyelectrolyte multilayer structures were prepared, characterized, and investigated according to their static and ultrafast optical properties. Using the dip-coating or spin-coating layer-by-layer deposition method, gold-nanoparticle layers were embedded in a polyelectrolyte environment with high structural perfection. Typical structures exhibit four repetition units, each consisting of one gold-particle layer and ten double layers of polyelectrolyte (cationic+anionic polyelectrolyte). The structures were characterized by X-ray reflectivity measurements, which reveal Bragg peaks up to the seventh order, evidencing the high stratification of the particle layers. In the same measurements pronounced Kiessig fringes were observed, which indicate a low global roughness of the samples. Atomic force microscopy (AFM) images verified this low roughness, which results from the high smoothing capabilities of polyelectrolyte layers. This smoothing effect facilitates the fabrication of stratified nanoparticle/polyelectrolyte multilayer structures, which is nicely illustrated by a transmission electron microscopy image. The samples' optical properties were investigated by static spectroscopic measurements in the visible and UV range. The measurements revealed a frequency shift of the reflectance and of the plasmon absorption band, depending on the thickness of the polyelectrolyte layers that cover a nanoparticle layer. When the covering layer becomes thicker than the particle interaction range, the absorption spectrum becomes independent of the polymer thickness. However, the reflectance spectrum continues shifting to lower frequencies (even for large thicknesses). The range of plasmon interaction was determined to be on the order of the particle diameter for 10 nm, 20 nm, and 150 nm particles.
The transient broadband complex dielectric function of a multilayer structure was determined experimentally by ultrafast pump-probe spectroscopy. This was achieved by simultaneous measurements of the changes in the reflectance and transmittance of the excited sample over a broad spectral range. The changes in the real and imaginary parts of the dielectric function were directly deduced from the measured data by using a recursive formalism based on the Fresnel equations. This method can be applied to a broad range of nanoparticle systems where experimental data on the transient dielectric response are rare. This complete experimental approach serves as a test ground for modeling the dielectric function of a nanoparticle compound structure upon laser excitation.
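The Fresnel-based recursion can be sketched with the standard characteristic-matrix method for a single layer at normal incidence; this is a generic textbook variant, not necessarily the exact recursion of the thesis, and the layer values are illustrative. From a film's complex index and thickness one obtains the reflectance R and transmittance T, the two quantities that, measured simultaneously, pin down the complex dielectric function.

```python
import numpy as np

def film_RT(n_f, d, wavelength, n_in=1.0, n_sub=1.5):
    """R and T of a single film on a substrate at normal incidence
    (characteristic-matrix method; real ambient/substrate indices)."""
    k0 = 2 * np.pi / wavelength
    delta = k0 * n_f * d                         # phase thickness of the film
    # characteristic matrix of the film
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n_f],
                  [1j * n_f * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    t = 2 * n_in / (n_in * B + C)
    R = np.abs(r)**2
    T = (n_sub / n_in) * np.abs(t)**2            # valid for real n_in, n_sub
    return R, T

R, T = film_RT(n_f=2.0, d=100e-9, wavelength=550e-9)
```

For a lossless film R + T = 1, a useful consistency check; inverting measured (R, T) pairs for the complex n_f (and hence the dielectric function) is the recursive step applied layer by layer.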
Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics
(2012)
Which tensor fields G on a smooth manifold M can serve as a spacetime structure? In the first part of this thesis, it is found that only a severely restricted class of tensor fields can provide classical spacetime geometries, namely those that can carry predictive, interpretable and quantizable matter dynamics. The obvious dependence of this characterization of admissible tensorial spacetime geometries on specific matter is not a weakness, but rather presents an insight: it was Maxwell theory that justified Einstein in promoting Lorentzian manifolds to the status of a spacetime geometry. Any matter that does not mimic the structure of Maxwell theory will force us to choose another geometry on which the matter dynamics of interest are predictive, interpretable and quantizable. These three physical conditions on matter impose three corresponding algebraic conditions on the totally symmetric contravariant coefficient tensor field P that determines the principal symbol of the matter field equations in terms of the geometric tensor G: the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. Remarkably, these physically necessary conditions on the geometry are mathematically already sufficient to realize all kinematical constructions familiar from Lorentzian geometry, for precisely the same structural reasons. This we were able to show by employing a subtle interplay of convex analysis, the theory of partial differential equations and real algebraic geometry. In the second part of this thesis, we then explore general properties of any hyperbolic, time-orientable and energy-distinguishing tensorial geometry.
Physically most important are the construction of freely falling non-rotating laboratories, the appearance of admissible modified dispersion relations to particular observers, and the identification of a mechanism that explains why massive particles that are faster than some massless particles can radiate off energy until they are slower than all massless particles in any hyperbolic, time-orientable and energy-distinguishing geometry. In the third part of the thesis, we explore how tensorial spacetime geometries fare when one wants to quantize particles and fields on them. This study is motivated, in part, by the need to provide the tools to calculate the rate at which superluminal particles radiate off energy to become infraluminal, as explained above. Remarkably, it is again the three geometric conditions of hyperbolicity, time-orientability and energy-distinguishability that allow the quantization of general linear electrodynamics on an area metric spacetime and the quantization of massive point particles obeying any admissible dispersion relation. We explore the issue of field equations of all possible derivative orders in a rather systematic fashion, and prove a practically useful theorem that determines Dirac algebras allowing the reduction of derivative orders. The final part of the thesis presents the sketch of a truly remarkable result that was obtained building on the work of the present thesis. Based in particular on the subtle duality maps between momenta and velocities in general tensorial spacetimes, it could be shown that gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated, but the formidable physical problem of their construction can be reduced to a mere mathematical task: the solution of a system of homogeneous linear partial differential equations.
This far-reaching physical result on modified gravity theories is a direct, but difficult to derive, outcome of the findings in the present thesis. Throughout the thesis, the abstract theory is illustrated through instructive examples.
Particles in Saturn’s main rings range in size from dust to even kilometer-sized objects. Their size distribution is thought to be a result of competing accretion and fragmentation processes. While growth is naturally limited in tidal environments, frequent collisions among these objects may contribute to both accretion and fragmentation. As ring particles are primarily made of water ice, attractive surface forces like adhesion could significantly influence these processes, finally determining the resulting size distribution. Here, we derive analytic expressions for the specific self-energy Q and the related specific break-up energy Q⋆ of aggregates. These expressions can be used for any aggregate type composed of monomeric constituents. We compare these expressions to numerical experiments in which we create aggregates of various types, including regular packings like the face-centered cubic (fcc), Ballistic Particle Cluster Aggregates (BPCA), and modified BPCAs with, e.g., different constituent size distributions. We show that, accounting for attractive surface forces such as adhesion, a simple approach is able to: a) generally account for the size dependence of the specific break-up energy required for fragmentation reported in the literature, namely the division into “strength” and “gravity” regimes, and b) estimate the maximum aggregate size in a collisional ensemble to be on the order of a few meters, consistent with the maximum aggregate size observed in Saturn’s rings of about 10 m.
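The "strength"/"gravity" division can be visualized with a toy Q⋆ curve. The coefficients below are invented purely for illustration (chosen so that the weakest aggregates come out at the few-meter scale mentioned in the abstract); the actual expressions are derived in the work from aggregate self-energies.

```python
import numpy as np

# Toy break-up energy: adhesion-dominated 'strength' term falling with size,
# plus a self-gravity term growing as R**2.  All coefficients are invented.
A, s = 1e-2, 0.4      # strength term amplitude (J/kg at R = 1 m) and slope
B = 1e-4              # gravity term coefficient (J/kg per m^2)

R = np.logspace(-3, 3, 400)          # aggregate radius in m
Q_star = A * R**-s + B * R**2

# the minimum of Q* marks the weakest, hence size-limiting, aggregates
R_weakest = R[np.argmin(Q_star)]
```

Analytically the minimum sits at R = (s·A / 2B)^(1/(s+2)), a few meters for these toy numbers; aggregates near this size are the easiest to disrupt in mutual collisions.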
The inspiral and merger of two black holes is among the most exciting and extreme events in our universe. Being one of the loudest sources of gravitational waves, they provide a unique dynamical probe of strong-field general relativity and a fertile ground for the observation of fundamental physics. While the detection of gravitational waves alone will allow us to observe our universe through an entirely new window, combining the information obtained from both gravitational wave and electro-magnetic observations will allow us to gain even greater insight in some of the most exciting astrophysical phenomena. In addition, binary black-hole mergers serve as an intriguing tool to study the geometry of space-time itself. In this dissertation we study the merger process of binary black-holes in a variety of conditions. Our results show that asymmetries in the curvature distribution on the common apparent horizon are correlated to the linear momentum acquired by the merger remnant. We propose useful tools for the analysis of black holes in the dynamical and isolated horizon frameworks and shed light on how the final merger of apparent horizons proceeds after a common horizon has already formed. We connect mathematical theorems with data obtained from numerical simulations and provide a first glimpse on the behavior of these surfaces in situations not accessible to analytical tools. We study electro-magnetic counterparts of super-massive binary black-hole mergers with fully 3D general relativistic simulations of binary black-holes immersed both in a uniform magnetic field in vacuum and in a tenuous plasma. We find that while a direct detection of merger signatures with current electro-magnetic telescopes is unlikely, secondary emission, either by altering the accretion rate of the circumbinary disk or by synchrotron radiation from accelerated charges, may be detectable. 
We propose a novel approach to measure the electro-magnetic radiation in these simulations and find a non-collimated emission that dominates over the collimated one appearing in the form of dual jets associated with each of the black holes. Finally, we provide an optimized gravitational wave detection pipeline using phenomenological waveforms for signals from compact binary coalescence and show that by including spin effects in the waveform templates, the detection efficiency is drastically improved and the bias on recovered source parameters is reduced. On the whole, this dissertation provides evidence that a multi-messenger approach to binary black-hole merger observations provides an exciting prospect to understand these sources and, ultimately, our universe.
One of the most exciting predictions of Einstein's theory of gravitation that have not yet been proven experimentally by a direct detection are gravitational waves. These are tiny distortions of the spacetime itself, and a world-wide effort to directly measure them for the first time with a network of large-scale laser interferometers is currently ongoing and expected to provide positive results within this decade. One potential source of measurable gravitational waves is the inspiral and merger of two compact objects, such as binary black holes. Successfully finding their signature in the noise-dominated data of the detectors crucially relies on accurate predictions of what we are looking for. In this thesis, we present a detailed study of how the most complete waveform templates can be constructed by combining the results from (A) analytical expansions within the post-Newtonian framework and (B) numerical simulations of the full relativistic dynamics. We analyze various strategies to construct complete hybrid waveforms that consist of a post-Newtonian inspiral part matched to numerical-relativity data. We elaborate on existing approaches for nonspinning systems by extending the accessible parameter space and introducing an alternative scheme formulated in the Fourier domain. Our methods can now be readily applied to multiple spherical-harmonic modes and precessing systems. In addition to that, we analyze in detail the accuracy of hybrid waveforms with the goal to quantify how numerous sources of error in the approximation techniques affect the application of such templates in real gravitational-wave searches. This is of major importance for the future construction of improved models, but also for the correct interpretation of gravitational-wave observations that are made utilizing any complete waveform family.
In particular, we comprehensively discuss how long the numerical-relativity contribution to the signal has to be in order to make the resulting hybrids accurate enough, and for currently feasible simulation lengths we assess the physics one can potentially do with template-based searches.
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. 
The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
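The exponential growth of the run length with the number of motors can be illustrated with a minimal Gillespie-type simulation. All rates and the velocity below (EPS, PI, V) are hypothetical placeholders, not parameters from the thesis, and the model is deliberately reduced to counting bound motors:

```python
import random

# Minimal Gillespie-type sketch (all rates hypothetical, for illustration):
# a cargo carried by n_motors motors; each bound motor unbinds at rate EPS,
# each unbound motor rebinds at rate PI while the cargo is still attached,
# and the cargo advances at velocity V as long as at least one motor is bound.
# The run ends when the last motor unbinds.

EPS, PI, V = 1.0, 5.0, 1.0  # unbinding rate, rebinding rate, velocity (a.u.)

def mean_run_length(n_motors, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bound, x = n_motors, 0.0
        while bound > 0:
            rate = bound * EPS + (n_motors - bound) * PI
            x += V * rng.expovariate(rate)      # advance until the next event
            if rng.random() < bound * EPS / rate:
                bound -= 1                      # one motor unbinds
            else:
                bound += 1                      # an unbound motor rebinds
        total += x
    return total / trials

for n in (1, 2, 3):
    print(n, round(mean_run_length(n), 2))
```

With these placeholder rates the mean run length grows steeply with the motor number, mirroring the exponential increase derived in the thesis.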
Eumelanin is a fluorophore with partly quite unusual spectral properties. Among other things, earlier publications reported differences between the one- and two-photon-excited fluorescence spectra, which is why a stepwise excitation process was suspected in the nonlinear excitation case. To better understand these and other optical properties of eumelanin, the present work pursued a variety of measurement approaches from linear and nonlinear optics on synthetic eumelanin in 0.1 M NaOH. From the results, a model was derived that consistently describes the observed photonic properties. In this cascaded-state model (cascade model), the absorbed photon energy is transferred stepwise from excited states with high transition energies to excited states with lower transition energies. Transient absorption measurements revealed dominant contributions with short lifetimes in the ps range, indicating a high relaxation rate along the cascade. By studying the nonlinearly excited fluorescence of eumelanin aggregates of different sizes, it could be shown that differences between the linearly and nonlinearly excited fluorescence spectra can be caused not only by a stepwise excitation process under nonlinear excitation but also by differences in the ratios of the quantum yields of small and large aggregates when switching from linear to nonlinear excitation. However, by determining the excitation cross section and the dependence of the nonlinearly excited fluorescence of eumelanin on the excitation pulse duration, a stepwise two-photon excitation process via an intermediate state with lifetimes in the ps range could nevertheless be demonstrated.
A key non-destructive technique for the analysis, optimization, and development of new functional materials such as sensors, transducers, electro-optical devices, and memory devices is presented. Thermal-Pulse Tomography (TPT) provides high-resolution three-dimensional images of the electric-field and polarization distributions in a material. This thermal technique uses pulsed heating by means of focused laser light that is absorbed by opaque electrodes. The diffusion of the heat causes changes in the sample geometry, generating a short-circuit current or a change in surface potential that contains information about the spatial distribution of electric dipoles or space charges. A reconstruction of the internal electric-field and polarization distribution in the material is then possible via scale-transformation or regularization methods. In this way, TPT was used for the first time to image the inhomogeneous ferroelectric switching in ferroelectric polymer films (candidates for memory devices). The results show the typical pinning of electric dipoles in the ferroelectric polymer under study and support the previous hypothesis of ferroelectric reversal at the grain level via nucleation and growth. To obtain more information about the lateral and depth resolution of the thermal techniques, TPT and its counterpart, the Focused Laser Intensity Modulation Method (FLIMM), were applied to ferroelectric films with grid-shaped electrodes. The results of both techniques, after data analysis with different regularization and scale methods, are in full agreement. The comparison also revealed a possibly overestimated lateral resolution of the FLIMM and establishes TPT as the more efficient and reliable thermal technique. After an improvement of the optics, the Thermal-Pulse Tomography method was applied to polymer-dispersed liquid crystal (PDLC) films, which are used in electro-optical applications.
The results indicated a possible electrostatic interaction between the COH group in the liquid crystals and the fluorine atoms of the ferroelectric matrix used. The geometrical parameters of the LC droplets were partially reproduced when compared with Scanning Electron Microscopy (SEM) images. For further applications, the use of a matrix polymer that is not strongly ferroelectric is suggested. In an effort to develop new polymer ferroelectrets and to optimize their properties, new multilayer systems were inspected. The results of the TPT method showed the non-uniformity of the internal electric-field distribution in the shaped macrodipoles and thus suggested the instability of the sample. Further investigation of multilayer ferroelectrets, as well as the use of less conductive polymer layers, is suggested.
This thesis contains several theoretical studies on optomechanical systems, i.e., physical devices in which mechanical degrees of freedom are coupled to optical cavity modes. This optomechanical interaction, mediated by radiation pressure, can be exploited for cooling and controlling mechanical resonators in a quantum regime. The goal of this thesis is to propose several new ideas for preparing mesoscopic mechanical systems (of the order of 10^15 atoms) in highly non-classical states. In particular, we have shown new methods for preparing optomechanical pure states, squeezed states, and entangled states. At the same time, procedures for experimentally detecting these quantum effects have been proposed. In particular, a quantitative measure of non-classicality has been defined in terms of the negativity of phase-space quasi-distributions. An operational algorithm for experimentally estimating the non-classicality of quantum states has been proposed and successfully applied in a quantum-optics experiment. The research has been performed with relatively advanced mathematical tools related to differential equations with periodic coefficients, classical and quantum Bochner theorems, and semidefinite programming. Nevertheless, the physics of the problems and the experimental feasibility of the results have been the main priorities.
This thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. In a first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Exploiting the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, such as the entropy in a critical process, should provide deeper insight into thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in flux-conservative form and in cylindrical coordinates. This, of course, brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms, which do not contain derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and instead to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but a difference is, of course, present at the numerical level.
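Schematically, for a single conserved variable q with radial flux f and source s, the difference between the two formulations can be sketched as follows (a simplified one-variable illustration, not the full general-relativistic system):

```latex
% Axisymmetric balance law with the geometric 1/r term:
\partial_t q + \frac{1}{r}\,\partial_r\!\left(r f\right) = s .

% Standard formulation: expand the flux operator and move the
% non-derivative 1/r term to the source,
\partial_t q + \partial_r f = s - \frac{f}{r} .

% New formulation: redefine conserved variable and flux via
% multiplication by r, \tilde q = r q and \tilde f = r f, so that
\partial_t \tilde q + \partial_r \tilde f = r\, s ,
% which keeps the left-hand side in strict flux-conservative form.
```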
Our tests show that the new formulation yields results with a global truncation error that is one or more orders of magnitude smaller than that of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole, and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena in the head-on collision of neutron stars was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass neutron stars, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, their velocity, and the polytropic index in the EOS. In this way they could observe a type-I critical phenomenon near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considered the head-on collision of Gaussian distributions of matter.
Also in this case they found the appearance of type-I critical behaviour, and they additionally performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and, in particular, is not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass neutron stars using a setup that is as similar as possible to the one considered above. While we confirm that the merged object exhibits type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with earlier results, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
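For orientation, the scaling relation underlying type-I critical behaviour can be written in its standard textbook form (a generic relation, not a result specific to this thesis):

```latex
% Lifetime of the near-critical configuration before dispersal or collapse:
\tau(P) \simeq -\frac{1}{\lambda}\,\ln\left|P - P_{\star}\right| + \mathrm{const},
% where \lambda is the growth rate (inverse e-folding time) of the single
% unstable mode of the critical solution S[P_\star]; the oscillatory fine
% structure appears as a periodic modulation superposed on this law.
```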
A newly developed azobenzene-containing material based on a supramolecular concept is investigated with respect to its structure formation during holographic exposure at 488 nm. The focus is on one-dimensional, sinusoidal surface reliefs with periodicities below 500 nm. It is shown how the degree of cross-linking of the photosensitive layer influences structure formation in this size range. To maximize the structure depth, process parameters of the exposure as well as material parameters are varied systematically. Under standard conditions and moderate exposure intensities of about 200 mW/cm², structure depths of up to 80 nm form within a few minutes at a period of 400 nm. By adjusting material parameters such as surface tension and viscosity, the maximum structure depth is doubled to 160 nm. The formation of two-dimensional gratings is also investigated by means of multiple exposures. The original structures are replicated in a molding process and transferred into layers of UV-curable polymers. The molding causes a slight degradation of the surface quality and a decrease of the structure depth. This loss is mitigated by lowering the process temperature. Using replicated surface gratings, second-order organic distributed feedback (DFB) lasers are fabricated in order to study the influence of grating parameters on the emission properties of these lasers. To this end, the optical gain properties of selected organic emitter materials are first characterized by the variable stripe length method. Polystyrene (PS) doped with the laser dye Pyrromethene 567 (PM567) exhibits, despite the low absorption due to the dye concentration, a comparatively low gain threshold of 50 µJ/cm² at about 575 nm.
The active guest-host system of the conjugated polymers MEH-PPV and F8BT* exhibits a high absorption and a small gain threshold of 2.5 µJ/cm² at 630 nm. This behavior is also reflected in the emission properties of the DFB lasers fabricated from these materials. The thicknesses of the active layers are in the range of hundreds of nanometers and are adjusted such that only the fundamental transverse modes can propagate in the waveguide. The grating period is chosen such that a light mode lies within the gain region of the emitter material. With FWHM values down to 0.3 nm, the emission lines of the lasers are spectrally very narrow and indicate a very good grating quality. The investigations yield minimum laser thresholds and maximum slope efficiencies of 4.0 µJ/cm² and 8.4% for MEH-PPV in F8BT* (at about 640 nm) and 80 µJ/cm² and 0.9% for PM567 in PS (at about 575 nm). Increasing the structure depth from 40 nm to 80 nm in MEH-PPV-doped F8BT* lasers leads to a significant increase in the outcoupled energy and the slope efficiency, and to a slight decrease in the laser threshold. This is a result of the enhanced coupling between the laser mode and the grating. The emission of DFB lasers with two-dimensional surface gratings shows a reduced divergence but no influence on the laser threshold. Finally, the photostability of DFB lasers is measured under various conditions. Embedding a conjugated polymer in an active matrix and operating in a nitrogen atmosphere increase the lifetime to more than one million pulses. By combining surface gratings in PDMS films with electroactive substrates, an electrically controllable deformation of the diffraction grating is achieved and transferred to a DFB laser. The voltage-induced deformation is first characterized in diffraction experiments and an optimal operating point is determined.
With the two elastomers SEBS12 and VHB4910, maximum period changes of 1.3% and 3.4%, respectively, are achieved in the gratings at a control voltage of 2 kV. The difference results from the different elastic moduli of the materials. Transferred to DFB lasers, a variation of the grating period perpendicular to the grating lines results in a continuous shift of the emission wavelength. With a voltage signal of 3.25 kV, the narrow-band emission of an elastic DFB laser is continuously shifted by almost 50 nm, from 604 nm to 557 nm. From the deformation behavior of both the bare diffraction gratings and the lasers, conclusions are drawn about the elasticity of the materials used, allowing improvements of the devices.
Mathematics plays a considerable, if ambivalent, role in physics education. It often even becomes an obstacle to learning physics and cannot unfold its emancipatory potential. The present work provides two building blocks for a well-founded approach to the use of mathematics in learning physics. In the theoretical part, on the one hand, aspects of the philosophy of science concerning the role of mathematics in physics are reviewed and made accessible in context to the physics-education research community. On the other hand, research results on learners' beliefs about physics and mathematics, as well as in the field of epistemology, are compiled. In the empirical part, beliefs about the role of mathematics in physics held by students in grades 10 and 12, as well as by pre-service physics teachers in their undergraduate studies, are surveyed by means of a questionnaire and evaluated using content-analytical and statistical methods. Among other things, the results show that, contrary to common opinion, mathematics in physics classes does not carry negative connotations for learners, although, at least for younger learners, it carries formal and algorithmic ones.
In the context of cosmological structure formation, sheets, filaments, and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe does not reside in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20 % can be found in this state. The remainder (about 50 to 70 % of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, constituting the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filament structures. Density and temperature profiles, as well as velocity fields, are expected to leave their special imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes which regulate the amount of gas transported by the streams.
This includes the time evolution of filaments as well as possible quenching mechanisms. In this context, the halo mass range in which cold-stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating due to a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations parametrized by the wavelength of the initial perturbation L. For L ~ 2 Mpc/h, the collapse leads to shock-confined structures. As a result of radiative cooling and of heating due to a UV background, a relatively cold and dense core forms. With increasing L, the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations to three dimensions, instead of a pancake structure we obtain a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h, filaments form which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core. Thus, our results confirm a multiphase structure, which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be identified with the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process.
Furthermore, the cross section of the streams is constricted by the outwards-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass higher than M_halo = 10^13 M_sun and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not sufficiently strong for thermal conduction to be effective.
To identify extreme events in the dynamics of the Indian summer monsoon (ISM) in the geological past, I propose a novel approach based on the quantification of fluctuations in a nonlinear similarity measure, which is sensitive to time intervals with marked changes in the dynamical complexity of short time series. A mathematical relationship between the new measure and dynamical invariants of the underlying system, such as fractal dimensions and Lyapunov exponents, is derived analytically. Furthermore, I develop a statistical test to estimate the significance of the dynamical transitions identified in this way. The strengths of the method are demonstrated by uncovering bifurcation structures in paradigmatic model systems, where, compared with traditional Lyapunov exponents, more complex dynamical transitions can be identified. We apply the newly developed method to the analysis of real measurement data in order to detect pronounced dynamical changes on millennial time scales in climate proxy records of the South Asian summer monsoon system during the Pleistocene. It turns out that many of these transitions are induced by the external influence of varying insolation, as well as by factors internal to the climate system that affect the monsoon (glacial cycles of the Northern Hemisphere and the onset of the tropical Walker circulation). Despite its applicability to general time series, the discussed approach is particularly suited to the study of short paleoclimate time series. Owing to the underlying dynamics of the atmospheric circulation and to topographic influences, the rainfall over the Indian subcontinent during the ISM occurs in extremely complex spatiotemporal patterns.
I present a detailed analysis of summer monsoon rainfall over the Indian peninsula based on event synchronization (ES), a measure of nonlinear correlation between point processes such as rainfall events. Using hierarchical clustering algorithms, I first identify regions with particularly coherent or homogeneous monsoon rainfall; the time-delay patterns of rain events can also be reconstructed. Furthermore, I carry out additional analyses based on the theory of complex networks. These studies provide valuable insights into the spatial organization, scales, and structures of heavy rainfall events above the 90th and 94th percentiles during the ISM (June to September). I further investigate the influence of various critical synoptic atmospheric systems, as well as of the steep topography of the Himalayas, on these rainfall patterns. The presented method is not only suitable for visualizing the structure of extreme rainfall events but can also identify atmospheric transport paths of water vapor and moisture sinks over the region on decadal scales. Finally, a simple procedure based on complex networks is presented for deciphering the spatial fine structure and temporal evolution of monsoon rainfall extremes during the past 60 years.
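The core of event synchronization can be sketched in a few lines. The version below uses a fixed coincidence window `tau` instead of the adaptive local window of the original ES definition, and the event times in the usage example are invented for illustration:

```python
# Simplified event-synchronization (ES) sketch. The original measure uses an
# adaptive local coincidence window; here a fixed window tau is assumed.
def event_sync(tx, ty, tau=2.0):
    """Symmetric synchronization strength Q in [0, 1] for two lists of event times."""
    def count(a, b):
        # events in a that follow an event in b within tau (or coincide with one)
        c = 0.0
        for ta in a:
            for tb in b:
                d = ta - tb
                if 0 < d <= tau:
                    c += 1.0
                elif d == 0:
                    c += 0.5  # simultaneous events count half per direction
        return c
    norm = (len(tx) * len(ty)) ** 0.5
    return (count(tx, ty) + count(ty, tx)) / norm if norm else 0.0

print(event_sync([1, 5, 9], [1, 5, 9]))     # identical event series -> Q = 1
print(event_sync([1, 5, 9], [20, 30, 40]))  # unrelated event series -> Q = 0
```

Applied to binarized rainfall series at two grid points, such a Q value would serve as the edge weight from which the climate networks described above are built.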
Actin-based directional motility is important for embryonic development, wound healing, immune responses, and the development of tissues. Actin and myosin are essential players in this process, which can be subdivided into protrusion, adhesion, and traction. Protrusion is the forward movement of the membrane at the leading edge of the cell. Adhesion is required to enable movement along a substrate, and traction finally leads to the forward movement of the entire cell body, including its organelles. While actin polymerization is the main driving force of cell protrusions, myosin motors lead to the contraction of the cell body. The goal of this work was to study the regulatory mechanisms of the motile machinery by selecting a representative key player for each stage of the signaling process: the regulation of Arp2/3 activity by WASP (actin system), the role of cGMP in myosin II assembly (myosin system), and the influence of phosphoinositide signaling (upstream receptor pathway). The model organism chosen for this work was the social amoeba Dictyostelium discoideum, owing to the well-established knowledge of its cytoskeletal machinery, its easy handling, and the high motility of its vegetative and starvation-developed cells. First, I focused on the dynamics of the actin cytoskeleton by modulating the activity of one of its key players, the Arp2/3 complex. This was achieved using the carbazole derivative Wiskostatin, an inhibitor of the Arp2/3 activator WASP. Cells treated with Wiskostatin adopted a round shape, with no or few pseudopodia. With the help of a microfluidic cell-squeezer device, I could show that Wiskostatin-treated cells display a reduced mechanical stability, comparable to cells treated with the actin-disrupting agent Latrunculin A. Furthermore, the WASP-inhibited cells adhere more strongly to a surface and show a reduced motility and chemotactic performance. However, the overall F-actin content of the cells was not changed.
Confocal microscopy and TIRF microscopy imaging showed that the cells maintained an intact actin cortex. Localized dynamic patches of increased actin polymerization were observed that, however, did not lead to membrane deformation. This indicated that the mechanisms of actin-driven force generation were impaired in Wiskostatin treated cells. It is concluded that in these cells, an altered architecture of the cortical network leads to a reduced overall stiffness of the cell, which is insufficient to support the force generation required for membrane deformation and pseudopod formation. Second, the role of cGMP in myosin II dynamics was investigated. Cyclic GMP is known to regulate the association of myosin II with the cytoskeleton. In Dictyostelium, intracellular cGMP levels increase when cells are exposed to chemoattractants, but also in response to osmotic stress. To study the influence of cyclic GMP on actin and myosin II dynamics, I used the laser-induced photoactivation of a DMACM-caged-Br-cGMP to locally release cGMP inside the cell. My results show that cGMP directly activates the myosin II machinery, but is also able to induce an actin response independently of cAMP receptor activation and signaling. The actin response was observed in both vegetative and developed cells. Possible explanations include cGMP-induced actin polymerization through VASP (vasodilator-stimulated phosphoprotein) or through binding of cGMP to cyclic nucleotide-dependent kinases. Finally, I investigated the role of phosphoinositide signaling using the Polyphosphoinositide-Binding Peptide (PBP10) that binds preferentially to PIP2. Phosphoinositides can recruit actin-binding proteins to defined subcellular sites and alter their activity. Neutrophils, as well as developed Dictyostelium cells produce PIP3 in the plasma membrane at their leading edge in response to an external chemotactic gradient. 
Although not essential for chemotaxis, phosphoinositides are proposed to act as an internal compass in the cell. When treated with the peptide PBP10, cells became round, with fewer or no pseudopods. PH-CRAC translocation to the membrane still occurred, even at low cAMP stimuli, but cell motility (random and directional) was reduced. My data revealed that the decrease in the pool of available PIP2 in the cell is sufficient to impair cell motility, while enough PIP2 remains for PIP3 to be formed in response to chemoattractant stimuli. My data thus highlight how sensitive cell motility and morphology are to changes in phosphoinositide signaling. In summary, I have analyzed representative regulatory mechanisms that govern key parts of the motile machinery and characterized their impact on cellular properties including mechanical stability, adhesion, and chemotaxis.
Organic thin-film transistors (TFTs) are an attractive option for low-cost electronic applications and may be used for active-matrix displays and for RFID applications. To extend the range of applications, there is a need to develop and optimize the performance of non-volatile memory devices that are compatible with the solution-processing fabrication procedures used in plastic electronics. A possible candidate is an organic TFT incorporating the ferroelectric copolymer poly(vinylidenefluoride-trifluoroethylene) (P(VDF-TrFE)) as the gate insulator. Dielectric measurements have been carried out on all-organic metal-insulator-semiconductor (MIS) structures with P(VDF-TrFE) as the gate insulator. The capacitance spectra of the MIS devices were measured under different biases, showing the effect of charge accumulation and depletion on the Maxwell-Wagner peak. The position and height of this peak clearly indicate the lack of stable depletion behavior and the decrease of mobility with increasing depletion-zone width, i.e., upon moving into the P3HT bulk. The lack of stable depletion was further investigated with capacitance-voltage (C-V) measurements. When the structure was driven into depletion, the C-V plots showed a positive flat-band voltage shift, arising from the change in the polarization state of the ferroelectric insulator. When biased into accumulation, the polarization was reversed. It is shown that the two polarization states are stable, i.e., no depolarization occurs below the coercive field. However, negative charge trapped at the semiconductor-insulator interface during the depletion cycle masks the negative shift in flat-band voltage expected during the sweep to accumulation voltages. The measured output characteristics of the studied ferroelectric field-effect transistors confirmed the results of the C-V plots.
Furthermore, the results indicated a trapping of electrons at the positively charged surfaces of the ferroelectrically polarized P(VDF-TrFE) crystallites near the insulator/semiconductor interface during the first poling cycles. The study of the MIS structure by means of thermally stimulated currents (TSC) revealed further evidence for the stability of the polarization under depletion voltages. It was shown that the lack of stable depletion behavior is caused by the compensation of the orientational polarization by fixed electrons at the interface, and not by the depolarization of the insulator, as proposed in several publications. The above results suggest that the performance of non-volatile memory devices can be improved by optimizing the interface.
The present work collects two introductory chapters and ten essays that can be read as critical-constructive contributions to an "experiential understanding" ("erlebendes Verstehen", Buck) of physics. The traditional design of school physics aims at a systematic presentation of scientific knowledge, which is then applied to selected examples: school experiments prove the statements of the systematic framework (or at least make them plausible), and selected phenomena are explained. Within such a framework, however, there is a real danger of losing touch with the lived reality and the interests of the students. This problem has been known for at least 90 years; didactic responses (inquiry-based learning, contextualization, student experiments, etc.) tend to address symptoms rather than causes. Science becomes exciting because it establishes a specifically investigative relationship to the world: one would have to learn not knowledge but "how to ask questions" (and, of course, how answers are found...). But what might this look like at the level of school physics, and what theoretical framework could support it? The collected papers pursue several of these threads: the rejection of overly model-based thinking in phenomenological optics, the distinction between formal-mathematical thinking and forms of scientific reasoning and evidence closer to lived reality, the potential of alternative interpretations of "physics teaching", the question of "understanding", and others. 
In doing so, not only do connections to the modern educational paradigm of competence become visible; the work also attempts to give a whole series of concrete examples from (school) physics of what happens when the topic is no longer ready-made answers but expeditions devoted to the physical world: the key concepts of the discipline, the methods of data collection and interpretation, and the movements of searching and thinking are then discussed in a way that does not lean on the systematic structure of the subject, but seeks to motivate it, give it contour, and make it comprehensible.
The Casimir-Polder interaction between a single neutral atom and a nearby surface, arising from the quantum and thermal fluctuations of the electromagnetic field, is a cornerstone of cavity quantum electrodynamics (cQED) and is theoretically well established. Recently, Bose-Einstein condensates (BECs) of ultracold atoms have been used to test the predictions of cQED. The purpose of the present thesis is to upgrade single-atom cQED with the many-body theory needed to describe trapped atomic BECs. Tools and methods are developed in a second-quantized picture that treats atom and photon fields on the same footing. We formulate a diagrammatic expansion using correlation functions for both the electromagnetic field and the atomic system. The formalism is applied to investigate, for BECs trapped near surfaces, dispersion interactions of the van der Waals-Casimir-Polder type and the bosonic stimulation of the spontaneous decay of excited atomic states. We also discuss a phononic Casimir effect, which arises from the quantum fluctuations in an interacting BEC.
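For orientation, the two limiting forms of the single-atom Casimir-Polder potential that this many-body treatment generalizes are the standard cQED results for a ground-state atom at distance $z$ from a perfectly conducting plane (textbook expressions, not specific to this thesis):

```latex
% Near-field (van der Waals) limit, z much smaller than the dominant
% atomic transition wavelength \lambda_A:
U(z) \simeq -\frac{C_3}{z^3}, \qquad
C_3 = \frac{\langle \hat{\mathbf{d}}^2 \rangle}{48\pi\varepsilon_0} ,
\\[4pt]
% Retarded (Casimir-Polder) limit, z much larger than \lambda_A,
% with \alpha(0) the static atomic polarizability:
U(z) \simeq -\frac{C_4}{z^4}, \qquad
C_4 = \frac{3\hbar c\,\alpha(0)}{32\pi^2\varepsilon_0} .
```

The thermal fluctuations mentioned in the abstract modify the far-field behavior further; the expressions above cover only the zero-temperature limits.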
Active Galactic Nuclei (AGN) are powered by gas accretion onto supermassive black holes (BHs). The luminosity of AGN can exceed the integrated luminosity of their host galaxies by orders of magnitude; such objects are classified as Quasi-Stellar Objects (QSOs). Some mechanism is needed to trigger the nuclear activity in galaxies and to feed the nuclei with gas. Among several possibilities, such as gravitational interactions, bar instabilities, and smooth gas accretion from the environment, the dominant process has yet to be identified. Feedback from AGN may be an important ingredient of the evolution of galaxies. However, the details of this coupling between AGN and their host galaxies remain unclear. In this work we aim to investigate the connection between AGN and their host galaxies by studying the properties of the extended ionised gas around AGN. Our study is based on observations of ~50 luminous, low-redshift (z<0.3) QSOs using the novel technique of integral field spectroscopy, which combines imaging and spectroscopy. After spatially separating the emission of AGN-ionised gas from HII regions, ionised solely by recently formed massive stars, we demonstrate that the specific star formation rates in several disc-dominated AGN hosts are consistent with those of normal star-forming galaxies, while others display no detectable star formation activity. Whether the star formation has been actively suppressed in those particular host galaxies by the AGN, or their gas content is intrinsically low, remains an open question. By studying the kinematics of the ionised gas, we find evidence for non-gravitational motions and outflows on kpc scales only in a few objects. The gas kinematics in the majority of objects, however, indicate a gravitational origin. This suggests that the importance of AGN feedback may have been overrated in theoretical works, at least at low redshifts. 
The [OIII] line is the strongest optical emission line of AGN-ionised gas, which can extend over several kpc in a region usually called the Narrow-Line Region (NLR). We perform a systematic investigation of the NLR size and determine an NLR size-luminosity relation that is consistent with the scenario of a constant ionisation parameter throughout the NLR. We show that previous narrow-band imaging with the Hubble Space Telescope underestimated the NLR size by a factor of >2 and that the AGN continuum luminosity is better correlated with the NLR size than the [OIII] luminosity. These effects may account for the different NLR size-luminosity relations reported in previous studies. On the other hand, we do not detect extended NLRs around all QSOs, and demonstrate that the detection of an extended NLR goes along with radio emission. We employ emission-line ratios as a diagnostic for the abundance of heavy elements in the gas, i.e. its metallicity, and find that the radial metallicity gradients are always flatter than in inactive disc-dominated galaxies. This can be interpreted as evidence for radial gas flows from the outskirts of these galaxies to the nucleus. Recent or ongoing galaxy interactions are likely responsible for this effect and may turn out to be a common prerequisite for QSO activity. The metallicities of bulge-dominated hosts are systematically lower than those of their disc-dominated counterparts, which we interpret as evidence for minor mergers, supported by our detailed study of the bulge-dominated host of the luminous QSO HE 1029-1401, or for smooth gas accretion from the environment. Along the same lines, another new discovery is that HE 2158-0107 at z=0.218 is the most metal-poor luminous QSO ever observed. Together with its large (30 kpc) extended structure of low-metallicity ionised gas, this makes smooth cold gas accretion the most likely scenario. 
Theoretical studies suggest that this process was much more important at earlier epochs of the universe, so HE 2158-0107 might be an ideal laboratory to study this mechanism of galaxy and BH growth at low redshift in more detail in the future.
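The constant-ionisation-parameter scenario behind the NLR size-luminosity relation mentioned above predicts a characteristic slope: with U ~ Q/(4*pi*R^2*n*c), fixed U and gas density n imply R ~ L^0.5. The sketch below illustrates this expected slope on synthetic data (the luminosity range, scatter, and normalisation are assumptions, not thesis measurements):

```python
import numpy as np

# Synthetic illustration of a constant-ionisation-parameter NLR relation:
# generate mock (logL, logR) pairs obeying R ~ L^0.5 with scatter, then
# recover the slope with a straight-line fit in log-log space.
rng = np.random.default_rng(0)
logL = rng.uniform(44.0, 47.0, 200)                  # ionising luminosity, log erg/s (assumed)
logR = 0.5 * (logL - 44.0) + 0.5 + rng.normal(0.0, 0.1, 200)  # log NLR radius (assumed scatter)

slope, intercept = np.polyfit(logL, logR, 1)
print(f"fitted log-log slope: {slope:.2f}")          # expected near 0.5
```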
Supermassive black holes are a fundamental component of the universe in general and of galaxies in particular. Almost every massive galaxy harbours a supermassive black hole (SMBH) in its center. Furthermore, there is a close connection between the growth of the SMBH and the evolution of its host galaxy, manifested in the relationship between the mass of the black hole and various properties of the galaxy's spheroid component, such as its stellar velocity dispersion, luminosity or mass. Understanding this relationship and the growth of SMBHs is essential for our picture of galaxy formation and evolution. In this thesis, I make several contributions to improve our knowledge of the census of SMBHs and of the coevolution of black holes and galaxies. The first route I follow on this road is to obtain a complete census of the black hole population and its properties. Here, I focus particularly on active black holes, observable as Active Galactic Nuclei (AGN) or quasars. These are found in large surveys of the sky. In this thesis, I use one of these surveys, the Hamburg/ESO survey (HES), to study the AGN population in the local volume (z~0). The demographics of AGN are traditionally represented by the AGN luminosity function, the space density of AGN as a function of luminosity. I determined the local (z<0.3) optical luminosity function of so-called type 1 AGN, based on the broad-band B_J magnitudes and AGN broad Halpha emission-line luminosities, free of contamination from the host galaxy. I combined this result with fainter data from the Sloan Digital Sky Survey (SDSS) and constructed the best current optical AGN luminosity function at z~0. The comparison of the luminosity function with higher redshifts supports the current notion of 'AGN downsizing', i.e. the space density of the most luminous AGN peaks at higher redshifts, while the space density of less luminous AGN peaks at lower redshifts. 
However, the AGN luminosity function does not reveal the full picture of active black hole demographics. This requires knowledge of the physical quantities, foremost the black hole mass and the accretion rate, and of the respective distribution functions, the active black hole mass function and the Eddington ratio distribution function. I developed a method for an unbiased estimate of these two distribution functions, employing a maximum likelihood technique and fully accounting for the selection function. I used this method to determine the active black hole mass function and the Eddington ratio distribution function for the local universe from the HES. I found a wide intrinsic distribution of black hole accretion rates and black hole masses. The comparison of the local active black hole mass function with the local total black hole mass function reveals evidence for 'AGN downsizing', in the sense that in the local universe the most massive black holes are in a less active stage than lower-mass black holes. The second route I follow is a study of redshift evolution in the black hole-galaxy relations. While theoretical models can in general explain the existence of these relations, their redshift evolution puts strong constraints on these models. Observational studies of the black hole-galaxy relations naturally suffer from selection effects. These can potentially bias the conclusions inferred from the observations if they are not taken into account. I investigated the issue of selection effects on type 1 AGN samples in detail and discuss various sources of bias, e.g. an AGN luminosity bias, an active-fraction bias and an AGN evolution bias. If the selection function of the observational sample and the underlying distribution functions are known, it is possible to correct for this bias. I present a fitting method to obtain an unbiased estimate of the intrinsic black hole-galaxy relations from samples that are affected by selection effects. 
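The maximum-likelihood idea described above can be sketched in a toy form: each observed object contributes phi(m)*S(m) / integral(phi*S) to the likelihood, so a known selection function S can be divided out without binning the data. The power-law mass function and linear selection below are illustrative assumptions, not the thesis's model:

```python
import numpy as np

# Toy maximum-likelihood recovery of a distribution-function slope in the
# presence of a known selection function.  All choices are illustrative.
rng = np.random.default_rng(1)

# Intrinsic phi(m) ~ m^-2 on [1, 10]; selection S(m) = m/10.
# Observed density is then ~ m^-1, sampled by inverse transform:
m_obs = 10.0 ** rng.uniform(0.0, 1.0, 20000)

m_grid = np.linspace(1.0, 10.0, 2000)
spacing = m_grid[1] - m_grid[0]

def neg_log_like(alpha):
    # Observed density ~ phi(m)*S(m) ~ m^(1-alpha); normalise numerically.
    norm = np.sum(m_grid ** (1.0 - alpha)) * spacing
    return -(np.sum((1.0 - alpha) * np.log(m_obs)) - m_obs.size * np.log(norm))

alphas = np.arange(1.5, 2.5, 0.01)
best = alphas[np.argmin([neg_log_like(a) for a in alphas])]
print(f"recovered power-law index: {best:.2f}")   # close to the true alpha = 2
```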
Third, I try to improve our census of dormant black holes and the determination of their masses. One of the most important techniques to determine the black hole mass in quiescent galaxies is stellar dynamical modeling. This method employs photometric and kinematic observations of the galaxy and infers the gravitational potential from the stellar orbits. It can reveal the presence of the black hole and give its mass, if the sphere of the black hole's gravitational influence is spatially resolved. However, the presence of a dark matter halo is usually ignored in the dynamical modeling, potentially biasing the determined black hole mass. I ran dynamical models for a sample of 12 galaxies, including a dark matter halo. For galaxies whose black hole sphere of influence is not well resolved, I found that the black hole mass is systematically underestimated when the dark matter halo is ignored, while there is almost no effect for galaxies with a well-resolved sphere of influence.
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of space-like infinity, yet quite general in the interior. The proof relies on abstract, non-constructive arguments, which makes it difficult to calculate such data numerically along similar lines. A quasilinear elliptic system of equations is presented which we expect can be used to construct vacuum initial data which are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions. It is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when trying to improve this result to show the existence of solutions that are asymptotically static at higher order. The problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions in asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed, in a way that opens the possibility of performing numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to the part of the quasilinear system that gives rise to the difficulties. For this decomposition, analogous results are obtained. It turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications. 
The question whether the results obtained so far can be used again to show, by a perturbation argument, the existence of vacuum initial data which approach static solutions at infinity at any given order thus remains open. The answer requires further analysis and perhaps new methods.
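For orientation, the Helmholtz decomposition invoked above as a model problem takes the standard form (textbook statement; the thesis's difficulty lies in solving the associated Poisson equation in weighted spaces that enforce the required decay at space-like infinity):

```latex
% Splitting of a vector field into a gradient (curl-free) part and a
% curl (divergence-free) part:
\mathbf{v} = \nabla\phi + \nabla\times\mathbf{A},
\qquad
\Delta\phi = \nabla\cdot\mathbf{v} ,
```

so prescribing fast decay on $\mathbf{v}$ forces one to control the decay of $\phi$ through the mapping properties of the Laplacian on the asymptotically flat manifold, which is where the surjectivity issues mentioned above enter.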
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, these molecules act on a length scale on which thermal collisions have a considerable impact on their motion. In this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description of the energy conversion by the molecular motor myosin V, drawing on many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single-molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: for loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured through the network description supplemented by a Markov process for the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule and treating the mechanical and chemical processes on equal grounds. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles. 
In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions. These reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description for the myosin V motor as studied in single motor experiments.
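The force dependence of stepping rates described above can be illustrated with a deliberately reduced model: a single forward and a single backward rate, each tilted exponentially by the external load acting over the step size. The rates and load-sharing factor below are toy assumptions, not the fitted parameters of the thesis's network model:

```python
import numpy as np

# Toy force-velocity relation for a single chemomechanical stepping cycle.
# Illustrative parameters only; myosin V's step size (~36 nm) is the one
# number taken from the literature, the rates are assumed.
KT = 4.1                  # thermal energy at room temperature, pN nm
D = 36.0                  # step size, nm
THETA = 0.5               # load-sharing factor (assumed)
K_F, K_B = 300.0, 7e-6    # zero-load forward/backward stepping rates, 1/s (assumed)

def velocity(f):
    """Mean velocity (nm/s) under a resisting (f > 0) or assisting load."""
    kf = K_F * np.exp(-THETA * f * D / KT)        # forward rate slowed by load
    kb = K_B * np.exp((1.0 - THETA) * f * D / KT) # backward rate enhanced by load
    return D * (kf - kb)

# The stall force follows from kf = kb; the load-sharing factor cancels:
f_stall = (KT / D) * np.log(K_F / K_B)
print(f"stall force ~ {f_stall:.1f} pN")          # ~2 pN for these toy rates
```

The change of the dominant pathway at superstall forces found in the thesis is, of course, not captured by this one-cycle caricature; it only shows how exponential load factors produce a stall.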
In the present work, synchronization phenomena in complex dynamical systems exhibiting multiple time scales have been analyzed. Multiple time scales can be active in different manners. Three different systems have been analyzed with different methods from data analysis. The first system studied is a large heterogeneous network of bursting neurons, i.e. a system with two predominant time scales: the fast firing of action potentials (spikes) and the bursts of repetitive spikes followed by a quiescent phase. This system has been integrated numerically and analyzed with methods based on recurrence in phase space. An interesting result is the different transitions to synchrony found on the two distinct time scales. Moreover, an anomalous synchronization effect can be observed on the fast time scale, i.e. there is a range of the coupling strength where desynchronization occurs. The second system, analyzed numerically as well as experimentally, is a pair of coupled CO₂ lasers in a chaotic bursting regime. This system is interesting due to its similarity to epidemic models. We explain the bursts by different time scales generated from unstable periodic orbits embedded in the chaotic attractor and perform a synchronization analysis of these different orbits utilizing the continuous wavelet transform. We find a diverse route to synchrony among these different observed time scales. The last system studied is a small network motif of limit-cycle oscillators. Specifically, we have studied a hub motif, which serves as an elementary building block of scale-free networks, a type of network found in many real-world applications. These hubs are of special importance for communication and information transfer in complex networks. Here, a detailed study of the mechanism of synchronization in oscillatory networks with a broad frequency distribution has been carried out. In particular, we find a remote synchronization of nodes in the network which are not directly coupled. 
We also explain the responsible mechanism together with its limitations and constraints. Furthermore, we derive an analytic expression for it and show that information transmission in pure phase oscillators, such as those of the Kuramoto type, is limited. In addition to the numerical and analytic analysis, an experiment consisting of electrical circuits was designed. The obtained results confirm the former findings.
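For readers unfamiliar with the setting, the hub motif studied above can be sketched with a minimal Kuramoto model: several leaves coupled only through a central hub. The sketch below shows only ordinary frequency locking as the coupling increases; all parameters are illustrative, and no attempt is made to reproduce the remote-synchronization effect itself, which (as the thesis shows) is limited in pure phase models:

```python
import numpy as np

# Minimal Kuramoto dynamics on a star (hub) motif: six leaves coupled
# only through the hub.  Illustrative frequencies and coupling.
rng = np.random.default_rng(2)
n_leaf = 6
omega = np.concatenate(([1.0], np.linspace(0.5, 1.5, n_leaf)))  # hub first
theta0 = rng.uniform(0.0, 2.0 * np.pi, n_leaf + 1)

def order_param(K, dt=0.01, steps=5000):
    """Euler-integrate the star motif; return the Kuramoto order
    parameter r in [0, 1], time-averaged over the second half."""
    th = theta0.copy()
    rs = []
    for step in range(steps):
        coupling = np.empty_like(th)
        coupling[0] = np.mean(np.sin(th[1:] - th[0]))   # hub feels all leaves
        coupling[1:] = np.sin(th[0] - th[1:])           # each leaf feels only the hub
        th = th + dt * (omega + K * coupling)
        if step > steps // 2:
            rs.append(abs(np.mean(np.exp(1j * th))))
    return float(np.mean(rs))

print(f"r(K=0) = {order_param(0.0):.2f},  r(K=4) = {order_param(4.0):.2f}")
```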
This thesis covers the topic "Thinning and Turbulence in Aqueous Films". Experimental studies in two-dimensional systems have gained an increasing amount of attention during the last decade. Thin liquid films serve as paradigms of atmospheric convection, thermal convection in the Earth's mantle, or turbulence in magnetohydrodynamics. Recent research on colloids, interfaces and nanofluids has led to advances in the development of micro-mixers (lab-on-a-chip devices). In this project, a detailed description of a thin-film experiment with focus on the particular surface forces is presented. The impact of turbulence on the thinning of liquid films oriented parallel to the gravitational force is studied. An experimental setup was developed which permits the capture of thin-film interference patterns under controlled surface and atmospheric conditions. The measurement setup also serves as a prototype of a mixer based on thermally induced turbulence in liquid thin films with thicknesses in the nanometer range. The convection is realized by placing a cooled copper rod in the center of the film. The temperature gradient between the rod and the atmosphere results in a density gradient in the liquid film, so that different buoyancies generate turbulence. In the work at hand, the thermally driven convection is characterized by a newly developed algorithm named Cluster Imaging Velocimetry (CIV). This routine determines the flow-relevant vector fields (velocity and deformation). On the basis of these insights, the flow in the experiment was investigated with respect to its mixing properties. The mixing characteristics were compared to theoretical models, and the mixing efficiency of the flow scheme was calculated. The gravitationally driven thinning of the liquid film was analyzed under the influence of turbulence. Strong shear forces lead to the generation of ultra-thin domains which consist of Newton black film. 
Due to the exponential expansion of the thin areas and the efficient mixing, this two-phase flow rapidly turns into the convection of only ultra-thin film. This turbulence driven transition was observed and quantified for the first time. The existence of stable convection in liquid nanofilms was proven for the first time in the context of this work.
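A routine like CIV, once it has extracted a velocity field, derives deformation quantities from it by finite differencing. The sketch below shows that post-processing step on an analytic rigid-rotation field; the test field is an assumption for illustration, not experimental data:

```python
import numpy as np

# Derive divergence and vorticity from a 2D velocity field with finite
# differences, as a velocimetry post-processing step would.
y, x = np.mgrid[-1:1:50j, -1:1:50j]
omega_rot = 0.8
u = -omega_rot * y          # rigid-body rotation: u = -w*y, v = w*x
v = omega_rot * x

dx = x[0, 1] - x[0, 0]                      # uniform grid spacing
du_dy, du_dx = np.gradient(u, dx)           # axis 0 is y, axis 1 is x
dv_dy, dv_dx = np.gradient(v, dx)

divergence = du_dx + dv_dy   # ~0 for this incompressible flow
vorticity = dv_dx - du_dy    # ~2*omega_rot everywhere for rigid rotation
print(divergence.mean(), vorticity.mean())
```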
The aim of this work is the investigation of the active components and their interactions in partially organic hybrid solar cells. These consist of a thin titanium dioxide layer combined with a thin polymer layer. The efficiency of the hybrid solar cells is determined by light absorption in the polymer, the dissociation of the generated excitons at the active interface between TiO2 and the polymer, and the generation and extraction of free charge carriers. To optimize the solar cells, fundamental physical interactions between the materials used as well as the influence of various fabrication parameters were investigated. Among other things, questions concerning the optimal choice of materials and preparation conditions were answered, and fundamental influences such as layer morphology and polymer infiltration were examined more closely. First, a selection for use in hybrid solar cells was made from titanium dioxide (acceptor layers) prepared in different ways. The criterion was the differing morphology resulting from the surface texture, the film structure, and the crystallinity, and the solar cell properties arising from them. For the subsequent investigations, mesoporous TiO2 films from a new nanoparticle synthesis, which allows crystalline particles to be produced already during the synthesis, were used as the electron acceptor, and conjugated polymers based on poly(p-phenylene vinylene) (PPV) or thiophene were used as the donor material. Thermal treatment of the TiO2 layers causes a temperature-dependent change of the morphology, but not of the crystal structure. The effects on the solar cell properties were documented and discussed. In order to exploit the advantage of the nanoparticle synthesis, the formation of crystalline TiO2 particles at low temperatures, first experiments on UV cross-linking were carried out. 
Besides the properties of the oxide layer, the influence of the polymer morphology, controlled by variation of the solvent and the annealing temperature, was also investigated. It could be shown that, among other factors, the viscosity of the polymer solution influences the infiltration into the TiO2 layer and thereby the efficiency of the solar cell. A further approach to increasing the efficiency is the development of new hole-conducting polymers that absorb light over as wide a spectral range as possible and are matched to the band gap of TiO2. To this end, several novel concepts, e.g. the combination of thiophene and phenyl units, were examined more closely. The sensitization of the titanium dioxide layer, following the higher efficiencies of dye-sensitized cells, was also considered. In summary, important parameters influencing the function of hybrid solar cells were identified in this work and in part discussed in detail. For several limiting factors, concepts for improvement or avoidance were presented.
The present thesis was born and evolved within the RAdial Velocity Experiment (RAVE), with the goal of measuring chemical abundances from RAVE spectra and exploiting them to investigate the chemical gradients along the plane of the Galaxy, in order to provide constraints on possible Galactic formation scenarios. RAVE is a large spectroscopic survey which aims to observe ~10^6 stars spectroscopically by the end of 2012 and to measure their radial velocities, atmospheric parameters and chemical abundances. The project makes use of the UK Schmidt telescope at the Australian Astronomical Observatory (AAO) in Siding Spring, Australia, equipped with the multi-object spectrograph 6dF. To date, RAVE has collected and measured more than 450,000 spectra. The precision of the chemical abundance estimates depends on the reliability of the atomic and atmospheric parameters adopted (in particular the oscillator strengths of the absorption lines and the effective temperature, gravity, and metallicity of the stars measured). Therefore we first identified 604 absorption lines in the RAVE wavelength range and refined their oscillator strengths with an inverse spectral analysis. Then we improved the RAVE stellar parameters by modifying the RAVE pipeline and the spectral library the pipeline relies on. The modifications removed some systematic errors in the stellar parameters discovered during this work. To obtain chemical abundances, we developed two different processing pipelines. Both perform chemical abundance measurements by assuming stellar atmospheres in Local Thermodynamic Equilibrium (LTE). The first determines elemental abundances from equivalent widths of absorption lines. Since this pipeline showed poor sensitivity to abundances relative to iron, it has been superseded. The second exploits chi^2 minimization between observed and model spectra. Thanks to its precision, it has been adopted for the creation of the RAVE chemical catalogue. 
This pipeline provides abundances with uncertainties of about ~0.2 dex for spectra with signal-to-noise ratio S/N>40 and ~0.3 dex for spectra with 20<S/N<40. For this work, the pipeline measured chemical abundances of up to 7 elements for 217,358 RAVE stars. With these data we investigated the chemical gradients along the Galactic radius of the Milky Way. We found that stars with low vertical velocities |W| (which stay close to the Galactic plane) show an iron abundance gradient in agreement with previous works (~ -0.07 dex kpc^-1), whereas stars with larger |W|, which are able to reach larger heights above the Galactic plane, show progressively flatter gradients. The gradients of the other elements follow the same trend. This suggests that an efficient radial mixing acts in the Galaxy or that the thick disk formed from homogeneous interstellar matter. In particular, we found hundreds of stars which can be kinematically classified as thick disk stars but exhibit a chemical composition typical of the thin disk. A few stars of this kind have already been detected by other authors, and their origin is still not clear. One possibility is that they are thin disk stars that were kinematically heated and then underwent an efficient radial mixing process which blurred (and so flattened) the gradient. Alternatively, they may be a "transition population" which represents an evolutionary bridge between the thin and thick disks. Our analysis shows that the two explanations are not mutually exclusive. Future follow-up high-resolution spectroscopic observations will clarify their role in the evolution of the Galactic disk.
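The chi^2 step at the heart of the abundance pipeline described above can be illustrated in miniature: compare an observed spectrum with synthetic models on a parameter grid and keep the minimum. The one-line Gaussian absorption model and all numbers below are toy assumptions, not the pipeline's physics:

```python
import numpy as np

# Toy chi^2 minimization between an "observed" spectrum and model spectra
# on a grid of line depths (a stand-in for the abundance parameter).
wave = np.linspace(8480.0, 8500.0, 400)     # wavelength grid, Angstrom (assumed)

def model(depth):
    """Normalized spectrum with one Gaussian absorption line; the line
    depth plays the role of the abundance parameter."""
    return 1.0 - depth * np.exp(-0.5 * ((wave - 8490.0) / 0.8) ** 2)

rng = np.random.default_rng(3)
sigma = 0.01                                 # flux noise, i.e. S/N ~ 100
observed = model(0.35) + rng.normal(0.0, sigma, wave.size)

depths = np.linspace(0.0, 1.0, 201)
chi2 = [np.sum((observed - model(d)) ** 2) / sigma**2 for d in depths]
best = depths[np.argmin(chi2)]
print(f"best-fit line depth: {best:.2f}")    # recovers ~0.35
```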
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols, as parts of the Arctic atmosphere, play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute continuous, altitude-resolved aerosol profiles to the understanding of the occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. Attention was focused on the analysis of periods with high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of the years 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months following the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450 m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction and depolarization) as well as water vapor profiles within the lower troposphere. Combining KARL data with data from other instruments on site, namely radiosondes, a sun photometer, a Micro Pulse LIDAR, and a tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere. 
The two spring periods, March and April 2007 and 2009, were first analyzed based on meteorological parameters such as local temperature and relative humidity profiles as well as large-scale pressure patterns and air mass origin regions. Here, it was not possible to find a clear correlation between enhanced AOD and air mass origin. However, in a comparison of two cloud-free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air mass transport through the central Arctic. This suggests the occurrence of aerosol transformation processes during the aerosol transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other case studies suggest a differentiation between three aerosol event types and their origins: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary-layer aerosols, and enhanced aerosol load across wide portions of the troposphere. For the spring period 2007, the available KARL data were statistically analyzed using a characterization scheme based on the optical characteristics of the scattering particles. The scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 provided the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar in both years, with maximum values above 0.1 about three to five weeks after the respective eruption. In both years, the stratospheric AOD persisted at higher values than usual until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct and thin layers, which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude. 
Three case studies were performed, all of which revealed rather large indices of refraction of m = (1.53–1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long-term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data from other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
The recent discovery of an intricate and nontrivial interaction topology among the elements of a wide range of natural systems has changed the way we understand complexity. For example, the axonal fibres transmitting electrical information between cortical regions form a network which is neither regular nor completely random. Their structure seems to follow functional principles balancing segregation (functional specialisation) and integration. Cortical regions are clustered into modules specialised in processing different kinds of information, e.g. visual or auditory. However, in order to generate a global perception of the real world, the brain needs to integrate these distinct types of information. Where this integration happens is still unknown. We have performed an extensive and detailed graph theoretical analysis of the cortico-cortical organisation in the brain of cats, trying to relate the individual and collective topological properties of the cortical areas to their function. We conclude that the cortex possesses a very rich communication structure, composed of a mixture of parallel and serial processing paths capable of accommodating dynamical processes with a wide variety of time scales. The communication paths between the sensory systems are not random, but largely mediated by a small set of areas. Far from acting as mere transmitters of information, these central areas are densely connected to each other, strongly indicating their functional role as integrators of multisensory information. In the quest to uncover the structure-function relationship of cortical networks, the peculiarities of this network have led us to continuously reconsider established graph measures. For example, a normalised formalism to identify the “functional roles” of vertices in networks with community structure is proposed. 
The tools developed for this purpose open the door to novel community detection techniques which may also characterise the overlap between modules. The concept of integration has been revisited and adapted to the requirements of the network under study. Additionally, analytical and numerical methods have been introduced to facilitate understanding of the complicated statistical interrelations between the distinct network measures. These methods help to construct new significance tests which can discriminate the relevant properties of real networks from side-effects of the evolutionary growth processes.
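The idea behind such vertex-role measures can be illustrated with the standard participation coefficient of Guimerà and Amaral, which quantifies how evenly a vertex spreads its links over the modules of a network (a plain sketch on a toy graph; the thesis proposes its own normalised formalism):

```python
import numpy as np

# Toy adjacency matrix: two 3-node modules (0-2 and 3-5) plus one hub
# (node 6) linking both. Purely illustrative, not data from the thesis.
A = np.zeros((7, 7), dtype=int)
edges = [(0, 1), (1, 2), (0, 2),          # module A
         (3, 4), (4, 5), (3, 5),          # module B
         (6, 0), (6, 1), (6, 3), (6, 4)]  # hub links into both modules
for i, j in edges:
    A[i, j] = A[j, i] = 1

modules = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B', 6: 'A'}

def participation(A, modules):
    """Participation coefficient P_i = 1 - sum_s (k_is / k_i)^2."""
    n = A.shape[0]
    P = np.zeros(n)
    labels = set(modules.values())
    for i in range(n):
        k_i = A[i].sum()
        if k_i == 0:
            continue
        for s in labels:
            k_is = A[i, [j for j in range(n) if modules[j] == s]].sum()
            P[i] -= (k_is / k_i) ** 2
        P[i] += 1.0
    return P

P = participation(A, modules)
# Node 6 spreads its links over both modules, so its participation is
# highest; nodes confined to a single module score zero.
```

Vertices with high participation and high degree are the candidates for the integrating "connector hub" role discussed above.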
The influence of dynamics on stratospheric ozone variability over the Arctic in early winter
(2010)
The early-winter ozone content is an indicator of the ozone content in late winter and spring. However, it shows strong year-to-year variations due to subsidence processes, chemically induced ozone depletion and wave activity. The present work shows that this variability is largely caused by dynamical processes during the formation phase of the Arctic polar vortex. Furthermore, the hitherto missing link between the early- and late-winter ozone content with respect to dynamics and chemistry is demonstrated. To investigate the relationship between the composition of the air masses enclosed in the polar vortex and the amount of ozone, observational data from satellite instruments and ozonesondes as well as model simulations with the Lagrangian chemistry/transport model ATLAS were used. The vertical component of the Eliassen-Palm flux vector through the 100 hPa surface, averaged over area (45–75°N) and time (August–November), reveals a connection between the early-winter air-mass composition inside the vortex and the vortex formation phase. This connection, however, is only valid for the lower stratosphere, since the vertical component does not capture the wave propagation conditions, which change within the stratosphere. For an improved representation of the signal with altitude, a new integral quantity based on the wave amplitude and the Charney-Drazin criterion was defined. This new quantity links the wave activity during the vortex formation phase both to the air-mass composition inside the polar vortex and to the ozone distribution over latitude. Enhanced wave activity leads to more air from lower, ozone-rich latitudes inside the polar vortex. In autumn and early winter, however, chemical processes that drive ozone toward equilibrium destroy the interannual ozone variability inside the vortex that is induced by dynamical processes during the formation phase of the Arctic polar vortex. 
An analysis of the persistence of a dynamically induced ozone anomaly into midwinter allows an estimate of the influence of these dynamical processes on the Arctic ozone content. For this purpose, model runs with the Lagrangian chemistry/transport model ATLAS were performed for the winter 1999–2000, providing detailed information on the preservation of an artificial ozone variability with respect to time, altitude and latitude. In summary, the ozone variability dynamically induced during the vortex formation phase persists longer inside than outside the polar vortex and loses its significant effect on midwinter ozone variability above 750 K potential temperature. At the altitudes below, a large fraction of the initial perturbation survives, up to 90% on the 450 K level. Within this altitude range, the dynamical processes during the vortex formation phase exert a decisive influence on the ozone content in midwinter.
Soft nanocomposites with enhanced electromechanical response for dielectric elastomer actuators
(2011)
Electromechanical transducers based on elastomer capacitors are presently considered for many soft actuation applications due to their large reversible deformation in response to electric-field-induced electrostatic pressure. The high operating voltage of such devices is currently a major drawback, hindering their use in applications such as biomedical devices and biomimetic robots; it could, however, be reduced by careful design of the material properties. The main targets for improvement are increasing the relative permittivity of the active material while maintaining high electric breakdown strength and low stiffness, which would lead to an enhanced electrostatic storage ability and hence a reduced operating voltage. Improvement of the functional properties is possible through the use of nanocomposites. These exploit the high surface-to-volume ratio of the nanoscale filler, resulting in large effects on macroscale properties. This thesis explores several strategies for nanomaterials design. The resulting nanocomposites are fully characterized with respect to their electrical and mechanical properties by means of dielectric spectroscopy, tensile mechanical analysis, and electric breakdown tests. First, nanocomposites consisting of high-permittivity rutile TiO2 nanoparticles dispersed in the thermoplastic block copolymer SEBS (poly(styrene-co-ethylene-co-butylene-co-styrene)) are shown to exhibit permittivity increases of up to 3.7 times, leading to a 5.6-fold improvement in electrostatic energy density, but with a trade-off in mechanical properties (an 8-fold increase in stiffness). The variation in both electrical and mechanical properties still allows for electromechanical improvement, such that a 27 % reduction of the driving electric field is found compared to the pure elastomer. 
Second, it is shown that the use of conductive nanofiller particles (carbon black, CB) can lead to a strong increase of the relative permittivity through percolation, however with detrimental side effects. These are due to localized enhancement of the electric field within the composite, which leads to sharp reductions in electric breakdown strength. Hence, in terms of stored electrical energy, the increase in permittivity does not make up for the reduction in breakdown strength, which may prohibit the practical use of such composites. Third, a completely new approach for increasing the relative permittivity and electrostatic energy density of a polymer, based on 'molecular composites', is presented, relying on chemically grafting soft π-conjugated macromolecules of polyaniline (PANI) to a flexible elastomer backbone. Polarization caused by charge displacement along the conjugated backbone is found to induce a large and controlled permittivity enhancement (470 % over the elastomer matrix), while the chemical bonding encapsulates the PANI chains, resulting in hardly any reduction in electric breakdown strength and hence in a large increase in stored electrostatic energy. This is shown to lead to an improvement in the sensitivity of the measured electromechanical response (83 % reduction of the driving electric field) as well as in the maximum actuation strain (250 %). These results represent a large step forward in the understanding of the strategies which can be employed to obtain high-permittivity polymer materials with practical use for electro-elastomer actuation.
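The trade-off discussed above can be made concrete with the standard expression for the maximum electrostatic energy density of a dielectric, u = ½ε₀εᵣE_bd². The numbers below are purely hypothetical and only illustrate why a permittivity gain can be cancelled by a breakdown-strength loss:

```python
EPS0 = 8.854e-12                 # vacuum permittivity, F/m

def energy_density(eps_r, e_bd):
    """Maximum stored electrostatic energy density, u = 1/2*eps0*eps_r*E_bd^2 (J/m^3)."""
    return 0.5 * EPS0 * eps_r * e_bd ** 2

# Hypothetical illustration (numbers are not from the thesis):
base    = energy_density(2.3, 100e6)        # neat elastomer, ~100 V/um breakdown
ceramic = energy_density(2.3 * 3.7, 80e6)   # ceramic filler: higher eps_r, modest loss
perc    = energy_density(2.3 * 10,  20e6)   # percolative filler: severe breakdown loss
```

Because u scales quadratically with the breakdown field but only linearly with permittivity, the percolative case ends up below the neat elastomer despite its much larger permittivity.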
Phase space reconstruction is a method that allows the phase space of a system to be reconstructed using only a one-dimensional time series as input. It can be used for calculating Lyapunov exponents and detecting chaos, it helps to understand complex dynamics and their behavior, and it can reproduce data that were not measured. There are many different methods which produce correct reconstructions, such as time delay, Hilbert transformation, derivation and integration. The most widely used is the time-delay method, but each method has special properties which are useful in different situations; hence, every reconstruction method has situations in which it is the best choice. Looking at all these different methods, the questions are: Why can all these different-looking methods be used for the same purpose? Is there any connection between all these functions? The answer is found in the frequency domain: after a Fourier transformation, all these methods take a similar form. Every presented reconstruction method can be described as a multiplication in the frequency domain with a frequency-dependent reconstruction function. This structure is also known as a filter. From this point of view, every reconstructed dimension can be seen as a filtered version of the measured time series: it contains the original data but applies a new focus, amplifying some parts and reducing others. Furthermore, I show that not every function can be used for reconstruction. In the thesis, three characteristics are identified which are mandatory for the reconstruction function. Under these restrictions one obtains a whole family of new reconstruction functions. It thus becomes possible to reduce noise within the reconstruction process itself, or to exploit the advantages of known reconstruction methods while suppressing their unwanted characteristics.
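The two viewpoints described above can be sketched numerically (illustrative signal and parameters): the snippet builds a time-delay embedding directly, and then reproduces the same delay as a multiplication with exp(-iωτ) in the frequency domain, i.e. as a filter:

```python
import numpy as np

# Illustrative 1-D signal; delay and embedding dimension are arbitrary choices.
N, dt = 2000, 0.01
t = np.arange(N) * dt
x = np.sin(2 * np.pi * 3.0 * t)

tau = 25                                  # delay in samples
m = 3                                     # embedding dimension

# Delay-coordinate vectors (x(t), x(t - tau), x(t - 2*tau))
emb = np.column_stack([x[(m - 1 - k) * tau : N - k * tau] for k in range(m)])

# Filter view: a pure delay is multiplication by exp(-i*omega*tau)
# in the frequency domain (circular delay, due to the DFT).
X = np.fft.fft(x)
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)
x_delayed = np.fft.ifft(X * np.exp(-1j * omega * tau * dt)).real
```

Replacing the pure-delay factor exp(-iωτ) by iω or 1/(iω) gives the derivative and integral coordinates mentioned above, which is exactly the common filter structure the text refers to.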
The Greenland Ice Sheet (GIS) contains enough water volume to raise global sea level by over 7 meters. It is a relic of past glacial climates that could be strongly affected by a warming world. Several studies have been performed to investigate the sensitivity of the ice sheet to changes in climate, but large uncertainties in its long-term response still exist. In this thesis, a new approach has been developed and applied to modeling the GIS response to climate change. The advantages compared to previous approaches are (i) that it can be applied over a wide range of climatic scenarios (both in the deep past and the future), (ii) that it includes the relevant feedback processes between the climate and the ice sheet and (iii) that it is highly computationally efficient, allowing simulations over very long timescales. The new regional energy-moisture balance model (REMBO) has been developed to model the climate and surface mass balance over Greenland and it represents an improvement compared to conventional approaches in modeling present-day conditions. Furthermore, the evolution of the GIS has been simulated over the last glacial cycle using an ensemble of model versions. The model performance has been validated against field observations of the present-day climate and surface mass balance, as well as paleo information from ice cores. The GIS contribution to sea level rise during the last interglacial is estimated to be between 0.5 and 4.1 m, consistent with previous estimates. The ensemble of model versions has been constrained to those that are consistent with the data, and a range of valid parameter values has been defined, allowing quantification of the uncertainty and sensitivity of the modeling approach. Using the constrained model ensemble, the sensitivity of the GIS to long-term climate change was investigated. 
It was found that the GIS exhibits hysteresis behavior (i.e., it is multi-stable under certain conditions), and that a temperature threshold exists above which the ice sheet transitions to an essentially ice-free state. The threshold in the global temperature is estimated to be in the range of 1.3-2.3°C above preindustrial conditions, significantly lower than previously believed. The timescale of total melt scales non-linearly with the overshoot above the temperature threshold, such that a 2°C anomaly causes the ice sheet to melt in ca. 50,000 years, but an anomaly of 6°C will melt the ice sheet in less than 4,000 years. The meltback of the ice sheet was found to become irreversible after a fraction of the ice sheet is already lost – but this level of irreversibility also depends on the temperature anomaly.
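The hysteresis behaviour described above can be illustrated with a generic one-variable bistable model (a sketch, not the REMBO/ice-sheet model itself): dx/dt = x - x³ + f, where f plays the role of a slowly varying temperature forcing and x of an ice-volume-like state.

```python
import numpy as np

def equilibrate(x, f, steps=2000, dt=0.01):
    """Relax the state x toward a stable fixed point at fixed forcing f."""
    for _ in range(steps):
        x += dt * (x - x ** 3 + f)
    return x

forcings = np.linspace(-1.0, 1.0, 81)
up, down = [], []

x = -1.0                          # start on the lower ("ice-covered") branch
for f in forcings:                # sweep the forcing up ...
    x = equilibrate(x, f)
    up.append(x)
for f in forcings[::-1]:          # ... and back down again
    x = equilibrate(x, f)
    down.append(x)
down = down[::-1]

# Inside the bistable window the two sweeps sit on different branches:
# the state depends on its history, so the transition is not reversible.
gap = float(np.max(np.abs(np.array(up) - np.array(down))))
```

The saddle-node bifurcations of this toy model play the role of the temperature threshold: once the forcing overshoots it, the system jumps to the other branch and does not return when the forcing is reduced only slightly.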
This dissertation presents a description of the phase dynamics of irregular oscillations and of their interactions. Chaotic and stochastic oscillations of autonomous dissipative systems are considered. For a phase description of stochastic oscillations, on the one hand, different values of the phase must be related to each other in order to describe the dynamics independently of the chosen parametrization of the oscillation. On the other hand, for stochastic and chaotic oscillations, those system states which share the same phase must be identified. In this dissertation, the values of the phase are related to each other via an averaged phase velocity function. For stochastic oscillations, however, different definitions of the mean velocity are possible. To better understand the differences between these velocity definitions, effective deterministic models of the oscillations are constructed on their basis. It turns out that these models mimic different properties of the oscillations, such as the mean frequency or the invariant probability distribution. Depending on the application, the effective phase velocity function of a particular model establishes an appropriate phase relationship. As explained using simple examples, the theory of effective phase dynamics can thus also describe continuously and pulse-like interacting stochastic oscillations. Furthermore, a criterion is described for the invariant identification of states of equal phase of irregular oscillations, forming so-called generalized isophases: the states of such an isophase are to become indistinguishable in their dynamical evolution. For stochastic oscillations, this criterion is interpreted in an average sense. 
As demonstrated with examples, different types of stochastic oscillations can thus be reduced in a unified way to a stochastic phase dynamics. Using a numerical algorithm for the estimation of isophases from data, the applicability of the theory is demonstrated on a signal of regular respiration. Furthermore, it turns out that the criterion of phase identification can only be fulfilled approximately for chaotic oscillations. Using the Rössler oscillator, the profound connection between approximate isophases, chaotic phase diffusion and unstable periodic orbits is laid out. Together, the theories of effective phase dynamics and of generalized isophases allow a comprehensive and unified phase description of irregular oscillations.
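The notion of an averaged phase velocity function can be illustrated numerically (a hypothetical noisy phase oscillator with illustrative parameters, not one of the systems treated in the thesis): simulate the phase and estimate the mean velocity conditioned on the phase by binning.

```python
import numpy as np

# Hypothetical oscillator: dphi/dt = omega + a*sin(phi) + xi(t),
# with Gaussian white noise xi of intensity D.
rng = np.random.default_rng(3)
omega, a, D = 1.0, 0.4, 0.05
dt, steps, nbins = 1e-3, 200_000, 64

noise = np.sqrt(2 * D * dt) * rng.standard_normal(steps)
phi = np.empty(steps + 1)
phi[0] = 0.0
for n in range(steps):                       # Euler-Maruyama integration
    phi[n + 1] = phi[n] + dt * (omega + a * np.sin(phi[n])) + noise[n]

# Bin the instantaneous velocities over the phase on [0, 2*pi):
v_inst = np.diff(phi) / dt
bins = ((phi[:-1] % (2 * np.pi)) / (2 * np.pi) * nbins).astype(int)
bins = np.minimum(bins, nbins - 1)           # guard against rounding to nbins
occ = np.bincount(bins, minlength=nbins)
f_eff = np.bincount(bins, weights=v_inst, minlength=nbins) / np.maximum(occ, 1)
```

The binned average f_eff recovers the phase-dependent drift of the oscillator; in the spirit of the text, such an estimated velocity function defines an effective deterministic phase model of the stochastic oscillation.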
Complex network theory provides an elegant and powerful framework to statistically investigate the topology of local and long-range dynamical interrelationships, i.e., teleconnections, in the climate system. Employing a refined methodology relying on linear and nonlinear measures of time series analysis, the intricate correlation structure within a multivariate climatological data set is cast into network form. Within this graph theoretical framework, vertices are identified with grid points taken from the data set, each representing a region on the Earth's surface, and edges correspond to strong statistical interrelationships between the dynamics on pairs of grid points. The resulting climate networks are neither perfectly regular nor completely random, but display the intriguing and nontrivial characteristics of complexity commonly found in real-world networks such as the internet, citation and acquaintance networks, food webs and cortical networks in the mammalian brain. Among other interesting properties, climate networks exhibit the "small-world" effect and possess a broad degree distribution with dominating super-nodes as well as a pronounced community structure. We have performed an extensive and detailed graph theoretical analysis of climate networks on the global topological scale, focusing on the flow and centrality measure betweenness, which is locally defined at each vertex but includes global topological information by relying on the distribution of shortest paths between all pairs of vertices in the network. The betweenness centrality field reveals a rich internal structure in complex climate networks constructed from reanalysis and atmosphere-ocean coupled general circulation model (AOGCM) surface air temperature data. 
Our novel approach uncovers an elaborately woven meta-network of highly localized channels of strong dynamical information flow, which we relate to global surface ocean currents and dub the backbone of the climate network, in analogy to the homonymous data highways of the internet. This finding points to a major role of the oceanic surface circulation in coupling and stabilizing the global temperature field in the long-term mean (140 years for the model run and 60 years for the reanalysis data). Carefully comparing the backbone structures detected in climate networks constructed using linear Pearson correlation and nonlinear mutual information, we argue that the high sensitivity of betweenness with respect to small changes in network structure may allow the detection of the footprints of strongly nonlinear physical interactions in the climate system. The results presented in this thesis are thoroughly founded and substantiated using a hierarchy of statistical significance tests on the level of time series and networks, i.e., by tests based on time series surrogates as well as network surrogates. This is particularly relevant when working with real-world data. Specifically, we developed new types of network surrogates to include the additional constraints imposed by the spatial embedding of the vertices in a climate network. Our methodology is of potential interest for a broad audience within the physics community and various applied fields, because it is universal in the sense of being valid for any spatially extended dynamical system. It can help to understand the localized flow of dynamical information in any such system by combining multivariate time series analysis, a complex network approach and the information flow measure betweenness centrality. Possible fields of application include fluid dynamics (turbulence), plasma physics and biological physics (population models, neural networks, cell models). 
Furthermore, the climate network approach is equally relevant for experimental data as well as model simulations and hence introduces a novel perspective on model evaluation and data-driven model building. Our work is timely in the context of the current debate on climate change within the scientific community, since it allows the regional vulnerability and stability of the climate system to be assessed from a new perspective, relying on global and not only on regional knowledge. The methodology developed in this thesis hence has the potential to contribute substantially to the understanding of the local effects of extreme events and tipping points in the Earth system within a holistic global framework.
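The construction pipeline described above can be sketched compactly with synthetic time series standing in for the climatological fields: threshold a Pearson correlation matrix into a graph, then compute shortest-path betweenness with Brandes' algorithm.

```python
import numpy as np
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted betweenness centrality
    (for undirected graphs every vertex pair is counted in both directions)."""
    n = len(adj)
    bc = [0.0] * n
    for s in range(n):
        sigma = [0] * n; sigma[s] = 1        # number of shortest s->v paths
        dist = [-1] * n; dist[s] = 0
        preds = [[] for _ in range(n)]
        order, q = [], deque([s])
        while q:                              # BFS from s
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = [0.0] * n                     # dependency accumulation
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

# Synthetic "grid point" series: two regions share a common signal with
# different coupling strengths (illustrative, not reanalysis data).
rng = np.random.default_rng(1)
T, n = 400, 16
common = rng.standard_normal(T)
series = np.array([(0.8 if i < n // 2 else 0.2) * common
                   + rng.standard_normal(T) for i in range(n)])
corr = np.corrcoef(series)
adj = [[j for j in range(n) if j != i and corr[i, j] > 0.3]
       for i in range(n)]
bc = betweenness(adj)
```

In the actual analysis the vertices are grid points of a temperature field and the threshold is chosen with significance tests; the resulting betweenness field is what the backbone analysis above is based on.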
Based on sun photometer measurements at three stations (AWIPEV/Koldewey in Ny-Ålesund (78.923°N, 11.923°E) 1995–2008, the 35th North Pole drifting station NP-35 (84.3–85.5°N, 41.7–56.6°E) March/April 2008, and Sodankylä (67.37°N, 26.65°E) 2004–2007), the aerosol variability in the European Arctic and its causes are investigated. The focus is on the relationship between the aerosol parameters measured at the stations (aerosol optical depth, Angström coefficient, etc.) and the transport of the aerosol, both on short time scales (days) and on long time scales (months, years). To establish this relationship on short time scales, 5-day backward trajectories were computed with the trajectory model PEP-Tracer for three starting heights (850 hPa, 700 hPa, 500 hPa) at 00, 06, 12 and 18 UTC. Using the non-hierarchical cluster method k-means, the computed backward trajectories were then grouped and assigned to specific source regions and to the measured aerosol optical depths. The assignment of aerosol optical depth to source region yields no unambiguous connection between the transport of polluted air masses from Europe or Russia/Asia and enhanced aerosol optical depth. Nevertheless, for one specific case (March 2008) a direct connection between aerosol transport and high aerosol optical depths can be demonstrated. In this case, forest-fire aerosol from southwestern Russia reached the Arctic and was observed both at NP-35 and in Ny-Ålesund. In a further step, an EOF analysis is used to examine to what extent large-scale atmospheric circulation patterns are responsible for the aerosol variability in the European Arctic. As with the trajectory analysis, the connection between the atmospheric circulation and the photometer measurements at the stations is generally only weak. 
An exception is found when considering the annual cycle of surface pressure and aerosol optical depth. High aerosol optical depths occur in spring, on the one hand, when the Icelandic Low and the Siberian High steer air masses from Europe or Russia/Asia into the Arctic, and on the other hand, when a strong high-pressure system is located over Greenland and large parts of the Arctic. It is also shown that the transition from spring to summer is at least partly caused by the change from the stable polar high in winter and spring to an Arctic atmosphere more strongly dominated by low-pressure systems in summer. The lower aerosol concentration in summer can partly be explained by an increase in wet deposition as an aerosol sink. For Ny-Ålesund, in addition to the transport patterns, the chemical composition of the aerosol is derived from impactor measurements at the Zeppelin station on Zeppelin Mountain (474 m a.s.l.) near Ny-Ålesund. The positive correlation of the aerosol optical depth with the concentrations of sulfate ions and soot is very clear. Both substances enter the atmosphere largely through anthropogenic emissions. This demonstrably anthropogenic composition of the Arctic aerosol stands in contrast to the lack of a clear connection with aerosol transport from industrial regions. This can only be explained by one or more transformation processes (e.g. nucleation of sulfuric acid particles) taking place during the transport from the source regions (Europe, Russia).
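The trajectory-clustering step can be sketched as follows (synthetic trajectories and a deterministic farthest-point initialisation; the actual analysis clustered PEP-Tracer output with the standard k-means method):

```python
import numpy as np

rng = np.random.default_rng(2)

def make_trajectories(lat0, lon0, dlat, dlon, count, steps=20):
    """Synthetic 5-day back-trajectories as flattened (lat..., lon...) vectors."""
    out = []
    for _ in range(count):
        lat = lat0 + np.cumsum(dlat + 0.5 * rng.standard_normal(steps))
        lon = lon0 + np.cumsum(dlon + 0.5 * rng.standard_normal(steps))
        out.append(np.concatenate([lat, lon]))
    return out

# Two hypothetical transport regimes arriving near Ny-Alesund (~79N, 12E):
X = np.array(make_trajectories(79, 12, -0.5, -2.0, 30)    # westward drift
             + make_trajectories(79, 12, -0.3, 3.0, 30))  # eastward drift

def kmeans(X, k, iters=50):
    # Deterministic farthest-point initialisation, then Lloyd iterations.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(X, 2)
```

Each cluster centre is then interpreted as a typical transport pathway, and the measured aerosol optical depths are composited over cluster membership, as described above.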
The present work describes new concepts for fast switching elements based on the principles of photonics. Waveguides operating in the visible and infrared ranges form the basis of these elements, and transparent polymers doped with dye molecules possessing second-order nonlinear optical properties are proposed as the materials for their fabrication. The work shows how nonlinear optical processes in such structures can be controlled by electro-optical and opto-optical control signals. The complete fabrication cycle of several types of integrated photonic elements is considered. A theoretical analysis of high-intensity beam propagation in media with second-order optical nonlinearity is performed, and quantitative estimates of the conditions necessary for second-order nonlinear optical phenomena to occur are made, taking into account the properties of the materials used. The work describes the various stages of manufacturing the basic structure of integrated photonics: the planar waveguide. Using the finite element method, the structure of the electromagnetic field inside the waveguide was analysed for different modes. A separate part of the work deals with the creation of composite organic materials with high optical nonlinearity. Using methods of quantum chemistry, the dependence of the nonlinear properties of dye molecules on their structure was investigated in detail. In addition, various methods of inducing optical nonlinearity in dye-doped polymer films are discussed. For the first time, a spatial modulation of the waveguide's nonlinear properties following the Fibonacci sequence is proposed, which allows several different nonlinear optical processes to be exploited simultaneously. The final part of the work describes various designs of integrated optical modulators and switches constructed from organic nonlinear optical waveguides. 
A practical design of an optical modulator based on a Mach-Zehnder interferometer, fabricated by photolithography on a polymer film, is presented.
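The operating principle of such a Mach-Zehnder modulator follows the textbook transfer characteristic I/I₀ = cos²(πV/2V_π); a small numerical illustration with a hypothetical half-wave voltage:

```python
import numpy as np

# Textbook Mach-Zehnder transfer function: the electro-optic phase shift
# pi*V/V_pi between the two arms turns an applied voltage into an
# intensity modulation. V_pi below is a hypothetical value.
V_pi = 5.0                                   # half-wave voltage, V
V = np.linspace(0.0, 2 * V_pi, 201)
I = np.cos(np.pi * V / (2 * V_pi)) ** 2      # normalized output intensity
```

At V = 0 the arms interfere constructively (full transmission), at V = V_π destructively (extinction), which is the switching action exploited in the designs above.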
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponentially – exist and that they play a major role in the dynamics of the systems under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit-cycle oscillators, which find various applications in the natural sciences, such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices. Here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems related to the discrete nonlinear Schrödinger equation, which describes, for example, coupled optical waveguides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method to solve the traveling wave equation. This results in a quasi-exact solution (up to numerical errors), which is the compacton. Another ansatz employed throughout this work is the quasi-continuous approximation, where the lattice is described by a continuous medium. Here, compactons are found analytically and are defined on a truly compact support. Remarkably, both approaches give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability under collisions. We show that the collisions are not exactly elastic but that a small part of the energy remains at the location of the collision. In finite lattices, this remaining part then triggers a multiple scattering process resulting in a chaotic state.
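A minimal sketch of the lattice-simulation approach (a generic purely anharmonic chain, not one of the thesis' specific models): with the interaction potential V(r) = r⁴/4 there is no linear dispersion, so a kicked site emits an excitation that stays sharply localized instead of radiating extended linear waves.

```python
import numpy as np

# Purely nonlinear chain: H = sum_i v_i^2/2 + sum_i (u_{i+1}-u_i)^4/4.
# A single site is kicked and the chain is integrated with velocity Verlet.
N, dt, steps = 200, 0.01, 4000
u = np.zeros(N)                  # displacements
v = np.zeros(N)                  # velocities
v[2] = 1.0                       # localized initial kick near the left end

def force(u):
    f = np.diff(u) ** 3          # dV/dr for V(r) = r^4/4 (bond forces)
    out = np.zeros_like(u)
    out[:-1] += f                # bond i pulls sites i and i+1 oppositely
    out[1:] -= f
    return out

a = force(u)
for _ in range(steps):           # velocity-Verlet time stepping
    v += 0.5 * dt * a
    u += dt * v
    a = force(u)
    v += 0.5 * dt * a

# Site-resolved energy and its participation number (small => localized):
energy = 0.5 * v ** 2
energy[:-1] += 0.25 * np.diff(u) ** 4
E_total = energy.sum()
participation = energy.sum() ** 2 / (energy ** 2).sum()
```

Tracking the energy distribution in this way is one simple diagnostic for the emergence of compact excitations from generic initial conditions, in the spirit of the simulations described above.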
This thesis is focused on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within dynamical mean-field theory, together with one-step model calculations of the photoemission process. From this comparison it is demonstrated that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, give too small mass renormalizations and scattering rates, thus demanding more refined many-body theories including nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated by a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that graphene on Ni(111) is, after intercalation of Au, spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented. 
It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex core polarization, a process which is beyond the limit of detection in the present experiments.
Preparation and investigation of polymer-foam films and polymer-layer systems for ferroelectrets
(2010)
Piezoelectric materials are very useful for applications in sensors and actuators. In addition to traditional ferroelectric ceramics and ferroelectric polymers, ferroelectrets have recently become a new group of piezoelectrics. Ferroelectrets are functional polymer systems for electromechanical transduction, with elastically heterogeneous cellular structures and internal quasi-permanent dipole moments. The piezoelectricity of ferroelectrets stems from linear changes of the dipole moments in response to external mechanical or electrical stress. Over the past two decades, polypropylene (PP) foams have been investigated with the aim of ferroelectret applications, and some products are already on the market. PP-foam ferroelectrets may exhibit piezoelectric d33 coefficients of 600 pC/N and more. Their operating temperature cannot, however, be much higher than 60 °C. Recently developed polyethylene-terephthalate (PET) and cyclo-olefin copolymer (COC) foam ferroelectrets show slightly better d33 thermal stabilities, but usually at the price of smaller d33 values. Therefore, the main aim of this work is the development of new thermally stable ferroelectrets with appreciable piezoelectricity. Physical foaming is a promising technique for generating polymer foams from solid films without any pollution or impurity. Supercritical carbon dioxide (CO2) or nitrogen (N2) is usually employed as the foaming agent due to its good solubility in several polymers. Polyethylene naphthalate (PEN) is a polyester with slightly better properties than PET. A “voiding + inflation + stretching” process has been specifically developed to prepare PEN foams. Solid PEN films are saturated with supercritical CO2 at high pressure and then thermally voided at high temperatures. Controlled inflation (Gas-Diffusion Expansion or GDE) is applied in order to adjust the void dimensions. 
Additional biaxial stretching decreases the void heights, since it is known that lens-shaped voids lead to lower elastic moduli and therefore also to stronger piezoelectricity. Both contact and corona charging are suitable for the electric charging of PEN foams. The light emission from the dielectric-barrier discharges (DBDs) can be clearly observed. Corona charging in a gas of high dielectric strength such as sulfur hexafluoride (SF6) results in higher gas-breakdown strength in the voids and therefore increases the piezoelectricity. PEN foams can exhibit piezoelectric d33 coefficients as high as 500 pC/N. Dielectric-resonance spectra show elastic moduli c33 of 1 − 12 MPa, anti-resonance frequencies of 0.2 − 0.8 MHz, and electromechanical coupling factors of 0.016 − 0.069. As expected, it is found that PEN foams show better thermal stability than PP and PET foams. Samples charged at room temperature can be utilized up to 80 − 100 °C. Annealing after charging or charging at elevated temperatures may improve the thermal stability. Samples charged at suitable elevated temperatures show working temperatures as high as 110 − 120 °C. Acoustic measurements at frequencies of 2 Hz − 20 kHz show that PEN foams are well suited for applications in this frequency range. Fluorinated ethylene-propylene (FEP) copolymers are fluoropolymers with very good physical, chemical and electrical properties. The charge-storage ability of solid FEP films can be significantly improved by adding boron nitride (BN) filler particles. FEP foams are prepared by means of a one-step procedure consisting of CO2 saturation and subsequent in-situ high-temperature voiding. Piezoelectric d33 coefficients up to 40 pC/N are measured on such FEP foams. Mechanical fatigue tests show that the as-prepared PEN and FEP foams are mechanically stable for long periods of time. Although polymer-foam ferroelectrets have a high application potential, their piezoelectric properties strongly depend on the cellular morphology, i.e. 
on size, shape, and distribution of the voids. On the other hand, controlled preparation of optimized cellular structures is still a technical challenge. Consequently, new ferroelectrets based on polymer-layer systems (sandwiches) have been prepared from FEP. By sandwiching an FEP mesh between two solid FEP films and fusing the polymer system with a laser beam, a well-designed uniform macroscopic cellular structure can be formed. Dielectric resonance spectroscopy reveals piezoelectric d33 coefficients as high as 350 pC/N, elastic moduli of about 0.3 MPa, anti-resonance frequencies of about 30 kHz, and electromechanical coupling factors of about 0.05. Samples charged at elevated temperatures show better thermal stabilities than those charged at room temperature, and the higher the charging temperature, the better the stability. After proper charging at 140 °C, the working temperatures can be as high as 110 − 120 °C. Acoustic measurements at frequencies of 200 Hz − 20 kHz indicate that the FEP layer systems are suitable for applications at least in this range.
The availability of large data sets has allowed researchers to uncover complex properties of complex systems, such as complex networks and human dynamics. A vast number of systems, from the Internet to the brain, power grids, and ecosystems, can be represented as large complex networks. Dynamics on and of complex networks has attracted more and more interest among researchers. In this thesis, first, I introduced a simple but effective dynamical optimization coupling scheme which can realize complete synchronization in networks with undelayed and delayed couplings and enhance the synchronizability of small-world and scale-free networks. Second, I showed that the robustness of scale-free networks with community structure was enhanced due to the existence of communities in the networks, and some of the response patterns were found to coincide with topological communities. My results provide insights into the relationship between network topology and functional organization in complex networks from another viewpoint. Third, since humans are an important kind of node in complex networks, human correspondence dynamics was studied in detail with both data and a model. A new and general type of human correspondence pattern was found, and an interacting priority-queues model was introduced to explain it. The model can also embrace a range of realistic social interaction systems such as email and letter communication. My findings provide insight into various human activities at both the individual and the network level. Fourth, I present clear new evidence that human comment behavior in on-line social systems, a different type of interacting human dynamics, is non-Poissonian, and a model based on personal attraction was introduced to explain it. These results are helpful for discovering regular patterns of human behavior in on-line society and for understanding the evolution of public opinion in virtual as well as real society. 
Finally, conclusions and an outlook on human dynamics and complex networks are given.
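The interacting priority-queues model itself is specific to this thesis, but the underlying mechanism can be illustrated with the classic single-queue variant (Barabási-style task execution); the following is a minimal sketch with illustrative parameter values, not the thesis' actual model:

```python
import random

def barabasi_queue(n_steps=5000, queue_len=10, p=0.95, seed=42):
    """Single priority queue: at each step, with probability p execute the
    highest-priority task, otherwise a random one; the executed task is
    replaced by a new task with a uniform random priority.  Returns the
    waiting time (in steps) of every executed task."""
    rng = random.Random(seed)
    queue = [(rng.random(), 0) for _ in range(queue_len)]  # (priority, birth step)
    waits = []
    for step in range(1, n_steps + 1):
        if rng.random() < p:
            idx = max(range(queue_len), key=lambda i: queue[i][0])
        else:
            idx = rng.randrange(queue_len)
        waits.append(step - queue[idx][1])
        queue[idx] = (rng.random(), step)
    return waits
```

For p close to 1, most tasks are served immediately while a few low-priority tasks wait very long, producing the heavy-tailed (non-Poissonian) waiting-time distributions characteristic of human correspondence data.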
The aim of this work is to overcome a discrepancy that exists between the theory of phase and phase dynamics and its application in time-series analysis: while the theoretical phase is uniquely determined and invariant under coordinate transformations, i.e. with respect to the chosen observable, the standard methods for estimating the phase from given time series lead to results that depend on the chosen observables and therefore do not describe the respective system in a unique and invariant way. To make this discrepancy explicit, the terminological distinction between phase and protophase is introduced: the term phase is used only for variables that correspond to the theoretical concept of the phase and therefore characterize the respective system in an invariant way, whereas the observable-dependent estimates of the phase from time series are called protophases. The central subject of this work is the development of a deterministic transformation that leads from any protophase of a self-sustained oscillator to the uniquely determined phase. This then enables an invariant description of coupled oscillators and their interaction. The application of the transformation and its effect are demonstrated both on numerical examples (in particular, the phase transformation is extended in one example to the case of three coupled oscillators) and on multivariate measurements of the ECG, the pulse and the respiration, from which phase models of the cardio-respiratory interaction are reconstructed. Finally, the phase transformation for autonomous oscillators is extended to the case of a non-negligible amplitude dependence of the protophase, which enables, for example, the numerical determination of the isochrones of the chaotic Rössler system.
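The core idea of the protophase-to-phase transformation can be illustrated in a simplified, rank-based form (the actual method estimates the protophase density, e.g. via a Fourier series; the function below is only an illustrative stand-in):

```python
import numpy as np

def protophase_to_phase(theta):
    """Simplified protophase -> phase transformation: map each wrapped
    protophase sample through the empirical CDF of the samples, so that
    the resulting phase is uniformly distributed on [0, 2*pi).  This is
    a rank-based stand-in for the density-based transformation
    phi(theta) = 2*pi * integral_0^theta rho(theta') dtheta'."""
    theta = np.mod(theta, 2.0 * np.pi)
    ranks = np.argsort(np.argsort(theta))   # position of each sample in sorted order
    return 2.0 * np.pi * (ranks + 0.5) / len(theta)
```

The transformed phase grows uniformly on average regardless of which observable generated the protophase, which is exactly the invariance property discussed above.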
Many cellular processes require decision making mechanisms, which must act reliably even in the unavoidable presence of substantial amounts of noise. However, the multistable genetic switches that underlie most decision-making processes are dominated by fluctuations that can induce random jumps between alternative cellular states. Here we show, via theoretical modeling of a population of noise-driven bistable genetic switches, that reliable timing of decision-making processes can be accomplished for large enough population sizes, as long as cells are globally coupled by chemical means. In the light of these results, we conjecture that cell proliferation, in the presence of cell-cell communication, could provide a mechanism for reliable decision making in the presence of noise, by triggering cellular transitions only when the whole cell population reaches a certain size. In other words, the summation performed by the cell population would average out the noise and reduce its detrimental impact.
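The qualitative mechanism can be sketched with a toy Langevin model (not the authors' actual gene-circuit equations; all parameter values are illustrative): N bistable units, each driven by its own noise, diffusively coupled through their mean field.

```python
import numpy as np

def simulate(n_cells=200, coupling=0.0, noise=0.5, steps=2000, dt=0.01, seed=0):
    """Euler-Maruyama integration of a population of bistable 'switches':
    dx_i = (x_i - x_i**3 + coupling*(mean(x) - x_i)) dt + noise dW_i.
    All cells start near the unstable state x = 0."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 0.1, n_cells)
    for _ in range(steps):
        drift = x - x**3 + coupling * (x.mean() - x)
        x = x + drift * dt + noise * np.sqrt(dt) * rng.normal(size=n_cells)
    return x
```

Without coupling the cells scatter randomly over both stable states; with strong global coupling the whole population makes one coherent transition, i.e. the mean field averages out the single-cell noise.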
This thesis is concerned with the development of numerical methods using finite difference techniques for the discretization of initial value problems (IVPs) and initial boundary value problems (IBVPs) of certain hyperbolic systems which are first order in time and second order in space. This type of system appears in some formulations of the Einstein equations, such as ADM, BSSN, NOR, and the generalized harmonic formulation. For the IVP, the stability method proposed in [14] is extended from second- and fourth-order centered schemes to 2n-order accuracy, including also the case when some first-order derivatives are approximated with off-centered finite difference operators (FDOs) and dissipation is added to the right-hand sides of the equations. For the model problem of the wave equation, special attention is paid to the analysis of Courant limits and numerical speeds. Although off-centered FDOs have larger truncation errors than centered FDOs, it is shown that in certain situations, off-centering by just one point can be beneficial for the overall accuracy of the numerical scheme. The wave equation is also analyzed with respect to its initial boundary value problem. All three types of boundaries that can appear in this case (outflow, inflow, and completely inflow boundaries) are investigated. Using the ghost-point method, 2n-accurate (n = 1, 4) numerical prescriptions are given for each type of boundary. The inflow boundary is also approached using the SAT-SBP method. At the end of the thesis, a 1-D variant of the BSSN formulation is derived and some of its IBVPs are considered. The boundary procedures, based on the ghost-point method, are intended to preserve the interior 2n-accuracy. Numerical tests show that this is the case if sufficient dissipation is added to the right-hand sides of the equations.
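As a concrete illustration of the centered second-order discretization and its Courant limit (the thesis treats 2n-order and off-centered operators; this sketch is only the simplest n = 1 case for the plain 1-D wave equation with fixed boundaries):

```python
import numpy as np

def leapfrog_wave(nx=101, courant=0.5, t_end=0.5):
    """Second-order centered scheme for u_tt = u_xx on [0, 1] with fixed
    boundaries and initial data u = sin(pi x), u_t = 0.  The scheme is
    stable for Courant number c*dt/dx <= 1 (CFL limit)."""
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = courant * dx
    steps = int(round(t_end / dt))
    u_prev = np.sin(np.pi * x)
    lap = np.zeros(nx)
    lap[1:-1] = u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]
    u = u_prev + 0.5 * courant**2 * lap          # Taylor start (u_t = 0)
    for _ in range(steps - 1):
        lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
        u, u_prev = 2 * u - u_prev + courant**2 * lap, u
    return x, u, steps * dt
```

Against the exact standing-wave solution u(x, t) = sin(pi x) cos(pi t), the remaining error is the numerical dispersion (phase) error, which shrinks with the grid spacing at second order.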
Coupling of the electrical, mechanical and optical response in polymer/liquid-crystal composites
(2010)
Micrometer-sized liquid-crystal (LC) droplets embedded in a polymer matrix may enable optical switching in the composite film through the alignment of the LC director along an external electric field. When a ferroelectric material is used as host polymer, the electric field generated by the piezoelectric effect can orient the director of the LC under an applied mechanical stress, making these materials interesting candidates for piezo-optical devices. In this work, polymer-dispersed liquid crystals (PDLCs) are prepared from poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) and a nematic liquid crystal (LC). The anchoring effect is studied by means of dielectric relaxation spectroscopy. Two dispersion regions are observed in the dielectric spectra of the pure P(VDF-TrFE) film. They are related to the glass transition and to a charge-carrier relaxation, respectively. In PDLC films containing 10 and 60 wt% LC, an additional, bias-field-dependent relaxation peak is found that can be attributed to the motion of LC molecules. Due to the anchoring effect of the LC molecules, this relaxation process is slowed down considerably, when compared with the related process in the pure LC. The electro-optical and piezo-optical behavior of PDLC films containing 10 and 60 wt% LCs is investigated. In addition to the refractive-index mismatch between the polymer matrix and the LC molecules, the interaction between the polymer dipoles and the LC molecules at the droplet interface influences the light-scattering behavior of the PDLC films. For the first time, it was shown that the electric field generated by the application of a mechanical stress may lead to changes in the transmittance of a PDLC film. Such a piezo-optical PDLC material may be useful e.g. in sensing and visualization applications. 
Compared to a non-polar matrix polymer, the polar matrix polymer exhibits a strong interaction with the LC molecules at the polymer/LC interface, which influences the electro-optical response of the PDLC films and prevents a larger increase in optical transmission.
Due to the unique environmental conditions and different feedback mechanisms, the Arctic region is especially sensitive to climate changes. The influence of clouds on the radiation budget is substantial, but difficult to quantify and parameterize in models. In the framework of this PhD project, elastic backscatter and depolarization lidar observations of Arctic clouds were performed during the international Arctic Study of Tropospheric Aerosol, Clouds and Radiation (ASTAR) from Svalbard in March and April 2007. Clouds were probed above the inaccessible Arctic Ocean with a combination of airborne instruments: The Airborne Mobile Aerosol Lidar (AMALi) of the Alfred Wegener Institute for Polar and Marine Research provided information on the vertical and horizontal extent of clouds along the flight track, optical properties (backscatter coefficient), and cloud thermodynamic phase. From the data obtained by the spectral albedometer (University of Mainz), the cloud phase and cloud optical thickness were deduced. Furthermore, in situ observations with the Polar Nephelometer, Cloud Particle Imager and Forward Scattering Spectrometer Probe (Laboratoire de Météorologie Physique, France) provided information on the microphysical properties, cloud particle size and shape, concentration, extinction, liquid and ice water content. In the thesis, a data set of four flights is analyzed and interpreted. The lidar observations served to detect atmospheric structures of interest, which were then probed with the in situ instruments. With this method, an optically subvisible ice cloud was characterized by the ensemble of instruments (10 April 2007). Radiative transfer simulations based on the lidar, radiation and in situ measurements allowed the calculation of the cloud forcing, amounting to -0.4 W m⁻². This slight surface cooling is negligible on a local scale. 
However, thin Arctic clouds have been reported more frequently in winter time, when the clouds' effect on longwave radiation (a surface warming of 2.8 W m⁻²) is not balanced by the reduced shortwave radiation (surface cooling). Boundary layer mixed-phase clouds were analyzed for two days (8 and 9 April 2007). The typical structure, consisting of a predominantly liquid water layer at cloud top and ice crystals below, was confirmed by all instruments. The lidar observations were compared to European Centre for Medium-Range Weather Forecasts (ECMWF) meteorological analyses. A change of air masses along the flight track was evidenced in the airborne data by a small completely glaciated cloud part within the mixed-phase cloud system. This indicates that the updraft necessary for the formation of new cloud droplets at cloud top is disturbed by the mixing processes. The measurements served to quantify the shortcomings of the ECMWF model in describing mixed-phase clouds. As the partitioning of cloud condensate into liquid and ice water is done by a diagnostic equation based on temperature, the cloud structures consisting of a liquid cloud top layer and ice below could not be reproduced correctly. A small amount of liquid water was calculated for the lowest (and warmest) part of the cloud only. Further, the liquid water content was underestimated by an order of magnitude compared to in situ observations. The airborne lidar observations of 9 April 2007 were compared to spaceborne lidar data from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) satellite. Both systems detected the same increase of cloud-top height along the flight track. However, during the time delay of 1 h between the lidar measurements, advection and cloud processing took place, and a detailed comparison of small-scale cloud structures was not possible. 
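The temperature-based liquid/ice partitioning can be illustrated with a diagnostic of the kind used in the ECMWF model (a quadratic ramp between roughly 250.16 K and 273.16 K; the exact constants here are an assumption for illustration):

```python
def liquid_fraction(T, t_ice=250.16, t_melt=273.16):
    """Diagnostic liquid fraction of cloud condensate as a function of
    temperature T (in K): all ice below t_ice, all liquid above t_melt,
    quadratic ramp in between."""
    if T <= t_ice:
        return 0.0
    if T >= t_melt:
        return 1.0
    return ((T - t_ice) / (t_melt - t_ice)) ** 2
```

Because the fraction depends on temperature only, such a diagnostic necessarily assigns the most liquid to the warmest (lowest) cloud layer and cannot reproduce the observed structure of a liquid layer at the cold cloud top with ice below.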
A double layer cloud at an altitude of 4 km was observed with lidar at the west coast of Svalbard (14 April 2007). The cloud system consisted of two geometrically thin liquid cloud layers (each 150 m thick) with ice below each layer. While the upper one was possibly formed by orographic lifting under the influence of westerly winds, or by the vertical wind shear shown by ECMWF analyses, the lower one might be the result of precipitation evaporating out of the upper layer. The existence of ice precipitation between the two layers supports the hypothesis that humidity released from evaporating precipitation was cooled, and consequently condensed, as it experienced the radiative cooling from the upper layer. In summary, a unique data set characterizing tropospheric Arctic clouds was collected with lidar, in situ and radiation instruments. The joint evaluation with meteorological analyses allowed detailed insights into cloud properties, cloud evolution processes and radiative effects.
CHAMP (CHAllenging Minisatellite Payload) is a German small-satellite mission to study the Earth's gravity field, magnetic field and upper atmosphere. Thanks to the good condition of the satellite, the originally planned 5-year mission was extended to 2009. The satellite continuously provides a large quantity of measurement data for studying the Earth. The measurements of the magnetic field are undertaken by two Fluxgate Magnetometers (FGM, vector magnetometers) and one Overhauser Magnetometer (OVM, scalar magnetometer) flown on CHAMP. In order to ensure the quality of the data during the whole mission, the calibration of the magnetometers has to be performed routinely in orbit. The scalar magnetometer serves as the magnetic reference, and its readings are compared with the readings of the vector magnetometer. The readings of the vector magnetometer are corrected by the parameters that are derived from this comparison, which is called the scalar calibration. In the routine processing, these calibration parameters are updated every 15 days by means of scalar calibration. There are also magnetic effects coming from the satellite which disturb the measurements. Most of them have been characterized during tests before launch. Among them are the remanent magnetization of the spacecraft and fields generated by currents. They are all considered to be constant over the mission life. The 8 years of operation experience allow us to investigate the long-term behavior of the magnetometers and the satellite systems. According to this investigation, it was found that, for example, the scale factors of the FGM show obvious long-term changes which can be described by logarithmic functions. The other parameters (offsets and angles between the three components) can be considered constant. If these continuous parameters are applied in the FGM data processing, the disagreement between the OVM and the FGM readings is limited to ±1 nT over the whole mission. 
This demonstrates that the magnetometers on CHAMP exhibit a very good stability. However, the daily correction of the FGM Z-component offset improves the agreement between the magnetometers markedly. The Z-component offset plays a very important role for the data quality. It exhibits a linear relationship with the standard deviation of the disagreement between the OVM and the FGM readings. After Z-offset correction, the errors are limited to ±0.5 nT (equivalent to a standard deviation of 0.2 nT). We improved the corrections for spacecraft fields which are not taken into account in the routine processing. Such disturbance fields, e.g. from the power-supply system of the satellite, cause systematic errors in the FGM data and are misinterpreted in the 9-parameter calibration, which introduces spurious local-time-related variations of the calibration parameters. These corrections are made by applying a mathematical model to the measured currents. This non-linear model is derived from an inversion technique. If the disturbance fields of the satellite body are fully corrected, the standard deviation of the scalar error ΔB remains about 0.1 nT. Additionally, in order to keep the OVM readings as a reliable standard, the imperfect coefficients of the torquer-current correction for the OVM are redetermined by solving a minimization problem. The temporal variation of the spacecraft remanent field is investigated. It was found that the average magnetic moment of the magneto-torquers reflects the moment of the satellite well. This allows for a continuous correction of the spacecraft field. The reasons for possible unknown systematic errors are discussed in this thesis. In particular, both temperature uncertainties and timing errors influence the FGM data. Based on the results of this thesis, the data processing of future magnetic missions can be designed in an improved way. 
In particular, the upcoming ESA mission Swarm can take advantage of our findings and provide all the auxiliary measurements needed for a proper recovery of the ambient magnetic field.
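The principle behind scalar calibration, making the modulus of the corrected vector readings match the scalar reference, can be sketched for a toy offset-only case (the actual routine fits 9 parameters; all numbers below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
b_true = np.array([3.0, -2.0, 5.0])          # hypothetical vector-magnetometer offsets (nT)
B_true = rng.normal(0.0, 20000.0, (500, 3))  # synthetic ambient field samples (nT)
F = np.linalg.norm(B_true, axis=1)           # scalar (OVM-like) reference readings
B_meas = B_true + b_true                     # vector (FGM-like) readings with offsets

# |B_meas - b|^2 = F^2 expands to a system that is linear in (b, |b|^2):
#   2 * B_meas . b - |b|^2 = |B_meas|^2 - F^2
A = np.column_stack([2.0 * B_meas, -np.ones(len(F))])
y = (B_meas**2).sum(axis=1) - F**2
sol, *_ = np.linalg.lstsq(A, y, rcond=None)
b_est = sol[:3]                              # recovered offsets
```

The same least-squares idea, with scale factors and non-orthogonality angles added as unknowns, underlies the 9-parameter scalar calibration described above.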
Central stars of planetary nebulae are low-mass stars on the brink of their final evolution towards white dwarfs. Because of their surface temperature of above 25,000 K, their UV radiation ionizes the surrounding material, which was ejected in an earlier phase of their evolution. Such fluorescent circumstellar gas is called a "Planetary Nebula". About one-tenth of the Galactic central stars are hydrogen-deficient. Generally, the surface of these central stars is a mixture of helium, carbon, and oxygen resulting from partial helium burning. Moreover, most of them have a strong stellar wind, similar to massive Pop-I Wolf-Rayet stars, and are in analogy classified as [WC]. The brackets distinguish the special type from the massive WC stars. Qualitative spectral analyses of [WC] stars lead to the assumption of an evolutionary sequence from the cooler, so-called late-type [WCL] stars to the very hot, early-type [WCE] stars. Quantitative analyses of the winds of [WC] stars became possible by means of computer programs that solve the radiative transfer in the co-moving frame, together with the statistical equilibrium equations for the population numbers. First analyses employing models without iron-line blanketing resulted in systematically different abundances for [WCL] and [WCE] stars. While the mass ratio of He:C is roughly 40:50 for [WCL] stars, it is 60:30 on average for [WCE] stars. The postulated evolution from [WCL] to [WCE], however, could only lead to an increase of carbon, since heavier elements are built up by nuclear fusion. In the present work, improved models are used to re-analyze the [WCE] stars and to confirm their He:C abundance ratio. Refined models, calculated with the Potsdam WR model atmosphere code (PoWR), now account for line-blanketing due to iron-group elements, small-scale wind inhomogeneities, and complex model atoms for He, C, O, H, P, N, and Ne. 
Referring to stellar evolutionary models for the hydrogen-deficient [WC] stars, Ne and N abundances are of particular interest. Only one out of three different evolutionary channels, the VLTP scenario, leads to Ne and N overabundances of a few percent by mass. A VLTP, a very late thermal pulse, is a rapid increase of the energy production of the helium-burning shell, while hydrogen burning has already ceased. Subsequently, the hydrogen envelope is mixed with deeper layers and completely burnt in the presence of C, He, and O. This results in the formation of N and Ne. A sample of eleven [WCE] stars has been analyzed. For three of them, PB 6, NGC 5189, and [S71d]3, an N overabundance of 1.5% has been found, while for three other [WCE] stars such high abundances of N can be excluded. In the case of NGC 5189, strong spectral lines of Ne can be reproduced qualitatively by our models. At present, the Ne mass fraction can only be roughly estimated from the Ne emission lines and seems to be of the order of a few percent by mass. Furthermore, using a diagnostic He-C line pair, the He:C abundance ratio of 60:30 for [WCE] stars is confirmed. Within the framework of the analysis, a new class of hydrogen-deficient central stars has been discovered, with PB 8 as its first member. Its atmospheric mixture resembles that of the massive WNL stars rather than that of the [WC] stars. The determined mass fractions H:He:C:N:O are 40:55:1.3:2:1.3. As the wind of PB 8 contains significant amounts of O and C, in contrast to WN stars, a classification as [WN/WC] is suggested.
Computational cosmology
(2008)
“Computational Cosmology” is the modeling of structure formation in the Universe by means of numerical simulations. These simulations can be considered as the only “experiment” to verify theories of the origin and evolution of the Universe. Over the last 30 years great progress has been made in the development of computer codes that model the evolution of dark matter (as well as gas physics) on cosmic scales, and a new research discipline has established itself. After a brief summary of cosmology we will introduce the concepts behind such simulations. We further present a novel computer code for numerical simulations of cosmic structure formation that utilizes adaptive grids to efficiently distribute the work and focus the computing power on regions of interest. In that regard we also investigate various (numerical) effects that influence the credibility of these simulations and elaborate on the procedure of how to set up their initial conditions. And as running a simulation is only the first step in modelling cosmological structure formation, we additionally developed an object finder that maps the density field onto galaxies and galaxy clusters and hence provides the link to observations. Despite the generally accepted success of the cold dark matter cosmology, the model still exhibits a number of deviations from observations. Moreover, none of the putative dark matter particle candidates have yet been detected. Utilizing both the novel simulation code and the halo finder, we perform and analyse various simulations of cosmic structure formation investigating alternative cosmologies. These include warm (rather than cold) dark matter, features in the power spectrum of the primordial density perturbations caused by non-standard inflation theories, and even modified Newtonian dynamics. 
We compare these alternatives to the currently accepted standard model and highlight the limitations on both sides; while those alternatives may cure some of the woes of the standard model, they also exhibit difficulties of their own. During the past decade, simulation codes and computer hardware have advanced to such a stage where it became possible to resolve in detail the sub-halo populations of dark matter halos in a cosmological context. These results, coupled with the simultaneous increase in observational data, have opened up a whole new window on the concordance cosmogony in the field that is now known as “Near-Field Cosmology”. We will present an in-depth study of the dynamics of subhaloes and the development of debris of tidally disrupted satellite galaxies. Here we postulate a new population of subhaloes that once passed close to the centre of their host and now reside in its outer regions. We further show that interactions between satellites inside the radius of their hosts may not be negligible. Moreover, the recovery of host properties from the distribution and properties of tidally induced debris material is not as straightforward as expected from simulations of individual satellites in (semi-)analytical host potentials.
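An object finder of the friends-of-friends type, the basic idea behind mapping a particle density field onto halos, can be sketched in a few lines (a brute-force O(N²) toy version, purely illustrative; the finder developed in the thesis is more sophisticated):

```python
import numpy as np

def friends_of_friends(pos, linking_length):
    """Brute-force friends-of-friends grouping: particles closer than the
    linking length (directly or transitively) end up in the same group.
    Uses a simple union-find structure with path compression."""
    n = len(pos)
    group = np.arange(n)                 # start: every particle is its own group
    def find(i):
        while group[i] != i:
            group[i] = group[group[i]]   # path compression
            i = group[i]
        return i
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        for j in np.nonzero(d < linking_length)[0]:
            ri, rj = find(i), find(int(j))
            if ri != rj:
                group[rj] = ri           # union the two groups
    return np.array([find(i) for i in range(n)])
```

Production halo finders add spatial search structures, periodic boundaries, and unbinding of energetically unbound particles, but the linking criterion is the same.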
A huge number of applications require coherent radiation in the visible spectral range. Since diode lasers are very compact and efficient light sources, there exists a great interest to cover these applications with diode laser emission. Despite modern band gap engineering, not all wavelengths can be accessed with diode laser radiation. Especially in the visible spectral range between 480 nm and 630 nm, no diode laser emission is available yet. Nonlinear frequency conversion of near-infrared radiation is a common way to generate coherent emission in the visible spectral range. However, radiation with extraordinary spatial, temporal, and spectral quality is required to pump frequency conversion. Broad area (BA) diode lasers are reliable high-power light sources in the near-infrared spectral range. They belong to the most efficient coherent light sources, with electro-optical efficiencies of more than 70%. Standard BA lasers are not suitable as pump lasers for frequency conversion because of their poor beam quality and spectral properties. For this purpose, tapered lasers and diode lasers with Bragg gratings are utilized. However, these new diode laser structures demand additional manufacturing and assembly steps, which makes their processing challenging and expensive. An alternative to BA diode lasers is the stripe-array architecture. The emitting area of a stripe-array diode laser is comparable to a BA device, and the manufacturing of these arrays requires only one additional process step. Such a stripe-array consists of several narrow striped emitters realized in close proximity. Due to the overlap of the fields of neighboring emitters or the presence of leaky waves, a strong coupling between the emitters exists. As a consequence, the emission of such an array is characterized by a so-called supermode. However, in a free-running stripe-array, mode competition between several supermodes occurs because of the lack of wavelength stabilization. 
This leads to power fluctuations, spectral instabilities and poor beam quality. Thus, it was necessary to study the emission properties of those stripe-arrays to find new concepts for an external synchronization of the emitters. The aim was to achieve stable longitudinal and transversal single-mode operation with high output powers, giving a brightness sufficient for efficient nonlinear frequency conversion. For this purpose a comprehensive analysis of the stripe-array devices was carried out here. The physical effects that are the origin of the emission characteristics were investigated theoretically and experimentally. In this context numerical models could be verified and extended. A good agreement between simulation and experiment was observed. One way to stabilize a specific supermode of an array is to operate it in an external cavity. Based on mathematical simulations and experimental work, it was possible to design novel external cavities to select a specific supermode and stabilize all emitters of the array at the same wavelength. This resulted in stable emission with 1 W output power, a narrow bandwidth in the range of 2 MHz and a very good beam quality with M²<1.5. This is a new level of brightness and brilliance compared to other BA and stripe-array diode laser systems. The emission from this external cavity diode laser (ECDL) satisfied the requirements for nonlinear frequency conversion. Furthermore, a substantial improvement over existing concepts was achieved. In the next step, newly available periodically poled crystals were used for second harmonic generation (SHG) in single-pass setups. With the stripe-array ECDL as pump source, more than 140 mW of coherent radiation at 488 nm could be generated with a very high opto-optical conversion efficiency. The generated blue light had very good transversal and longitudinal properties and could be used to generate biphotons by parametric down-conversion. 
This was feasible because of the improvements made to the infrared stripe-array diode lasers through the development of new physical concepts.
After the epoch of reionisation, the intergalactic medium (IGM) is kept at a high photoionisation level by the cosmic UV background radiation field. Primarily composed of the integrated contributions of quasars and young star-forming galaxies, its intensity is subject to spatial and temporal fluctuations. In particular, in the vicinity of luminous quasars the UV radiation intensity grows by several orders of magnitude. Due to the enhanced UV radiation up to a few Mpc from the quasar, the ionised hydrogen fraction significantly increases and becomes visible as a reduced level of absorption in the HI Lyman alpha (Ly-alpha) forest. This phenomenon is known as the proximity effect, and it is the main focus of this thesis. Modelling the influence of the quasar radiation on the IGM, one is able to determine the UV background intensity at a specific frequency (J_nu_0), or equivalently, its photoionisation rate (Gamma_b). This is of crucial importance for both theoretical and observational cosmology. Thus far, the proximity effect has been investigated primarily by combining the signal of large samples of quasars, as it has been regarded as a statistical phenomenon. Only a handful of studies tried to measure its signature on individual lines of sight, albeit focusing on one sight line only. Our aim is to perform a systematic investigation of large samples of quasars searching for the signature of the proximity effect, with a particular emphasis on its detection on individual lines of sight. We begin this survey with a sample of 40 high-resolution (R~45000), high signal-to-noise ratio (S/N~70) quasar spectra at redshift 2.1<z<4.7, publicly available in the European Southern Observatory (ESO) archive. The extraordinary quality of this data set enables us to detect the proximity effect signature not only in the combined quasar sample, but also along each individual sight line. 
This allows us not only to determine the UV background intensity at the mean redshift of this sample, but also to estimate its intensity in small (Delta z~0.2) redshift intervals in the range 2<z<4. Our estimates (J_nu_0 ~ 3x10^{-22} erg s^{-1} cm^{-2} Hz^{-1} sr^{-1}) are for the first time in very good agreement with independent constraints on its evolution obtained from theoretical predictions and numerical simulations. We continue this systematic analysis of the proximity effect with the largest search to date, based on the Sloan Digital Sky Survey (SDSS) data set. The sample consists of 1733 quasars at redshifts z>2.3. In spite of the low resolution and limited S/N, we detect the proximity effect in about 98% of the quasars at a high significance level. We are thereby able to determine the evolution of the UV background photoionisation rate within the redshift range 2<z<5, finding Gamma_b ~ 1.6x10^{-12} s^{-1}. With these new measurements we explore literature estimates of the quasar luminosity function and predict the stellar luminosity density up to a redshift of about z~5. Our results are globally in good agreement with recent determinations inferred from deep surveys of high-redshift galaxies. We then compare the UV background photoionisation rates inferred from the two samples at high and low resolution. Although these data sets differ greatly in quality, our determinations agree well at z<3.3, while showing less agreement at higher redshifts. We suspect that this may be caused either by the small number of high-resolution quasar spectra at the highest redshifts considered or by some systematic effect due to the limited data quality of SDSS. Complementary to the observational investigation of the proximity effect in high-redshift quasars, we explore some theoretical aspects linked to, and based on, the results on this phenomenon.
We employ complex numerical simulations of structure formation to achieve a better representation of the Ly-alpha forest. By modelling the signature of the proximity effect on randomly selected sight lines, we demonstrate the advantages of analysing individual lines of sight instead of combining their signal. Furthermore, we develop and test novel techniques aimed at a more precise determination of the proximity effect signal. With this investigation we demonstrate that the technique developed and employed in this thesis is the most accurate one adopted thus far. Tighter determinations of the UV background rest not only on suitable methods for detecting its signature, but also on a deeper understanding of the environments in which quasars form and evolve. We therefore initiate an investigation using complex numerical simulations that include radiative transfer in order to model the proximity effect in more detail. Such a simulation may lead to the characterisation of the quasar environment through a comparison between the observed and simulated statistical properties of the proximity effect signature.
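The quoted values of J_nu_0 and Gamma_b are linked by a frequency integral over the UV background spectrum. The following is a minimal numerical sketch of that conversion, assuming an illustrative power-law background J_nu = J_nu_0 (nu/nu_0)^(-alpha) with alpha = 1.5 and the simplified hydrogen photoionisation cross section sigma_nu = sigma_0 (nu/nu_0)^(-3); the spectral slope and cross-section model are assumptions for illustration, not values taken from the thesis:

```python
import math

# Physical constants and assumed inputs (cgs units)
H_PLANCK = 6.626e-27   # erg s
SIGMA_0 = 6.3e-18      # cm^2, H photoionisation cross section at the Lyman limit
J_NU_0 = 3e-22         # erg s^-1 cm^-2 Hz^-1 sr^-1 (value quoted in the text)
ALPHA = 1.5            # assumed spectral slope of the UV background

def gamma_analytic(j_nu0, alpha):
    """Gamma = int_{nu_0}^inf 4 pi J_nu sigma_nu / (h nu) dnu.
    For J_nu ~ nu^-alpha and sigma_nu ~ nu^-3 this integrates to
    Gamma = 4 pi J_nu0 sigma_0 / (h (alpha + 3))."""
    return 4 * math.pi * j_nu0 * SIGMA_0 / (H_PLANCK * (alpha + 3))

def gamma_numeric(j_nu0, alpha, x_max=100.0, n=200000):
    """Crude midpoint integration in the variable x = nu/nu_0."""
    dx = (x_max - 1.0) / n
    total = 0.0
    for i in range(n):
        x = 1.0 + (i + 0.5) * dx
        total += x ** (-(alpha + 4)) * dx  # integrand 4pi J_nu sigma_nu/(h nu) dnu in units of x
    return 4 * math.pi * j_nu0 * SIGMA_0 / H_PLANCK * total

print(gamma_analytic(J_NU_0, ALPHA))  # ~8e-13 s^-1, same order as Gamma_b ~ 1.6e-12 s^-1
```

The analytic and numeric results agree to better than a percent, and the order of magnitude matches the photoionisation rate quoted from the SDSS sample.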
The presented thesis describes the observations of the Galactic center Quintuplet cluster, the spectral analysis of the cluster's Wolf-Rayet stars of the nitrogen sequence to determine their fundamental stellar parameters, and discusses the obtained results in a general context. The Quintuplet cluster was discovered in one of the first infrared surveys of the Galactic center region (Okuda et al. 1987, 1989) and was observed for this project with the ESO-VLT near-infrared integral field instrument SINFONI-SPIFFI. The subsequent data reduction was performed in part with a self-written pipeline to obtain flux-calibrated spectra of all objects detected in the imaged field of view. First results of the observation were compiled and published in a spectral catalog of 160 flux-calibrated $K$-band spectra in the range of 1.95 to 2.45\,$\mu$m, containing 85 early-type (OB) stars, 62 late-type (KM) stars, and 13 Wolf-Rayet stars. About 100 of these stars are cataloged for the first time. The main part of the thesis project concentrated on the analysis of the WR stars of the nitrogen sequence and one further identified emission-line star (Of/WN) with tailored Potsdam Wolf-Rayet (PoWR) models for expanding atmospheres (Hamann et al. 1995), which are applied to derive the stellar parameters of these stars. For this purpose, the atomic input data of the PoWR models had to be extended by further line transitions in the near-infrared spectral range to enable adequate model spectra to be calculated. These models were then fitted to the observed spectra, revealing typical parameters for this class of stars. A significant amount of hydrogen of up to $X_\text{H} \sim 0.2$ by mass fraction is still present in their stellar atmospheres. The stars are also found to be very luminous ($\log{(L/L_\odot)} > 6.0$) and show mass-loss rates and wind characteristics typical of radiation-driven winds. By comparison with stellar evolutionary models (Meynet \& Maeder 2003a; Langer et al.
1994), the initial masses were estimated; they indicate that the Quintuplet WN stars are descendants of the most massive O stars with $M_\text{init} > 60 M_\odot$, and their ages correspond to a cluster age of 3-5\,million years. The analysis of the individual WN stars revealed an average extinction of $A_K = 3.1 \pm 0.5$\,mag ($A_V = 27 \pm 4$\,mag) towards the Quintuplet cluster. This extinction was applied to derive the stellar luminosities of the remaining early-type and late-type stars in the catalog, and a Hertzsprung-Russell diagram could be compiled. Surprisingly, two stellar populations are found: a group of main-sequence OB stars and a group of evolved late-type stars, i.e. red supergiants (RSG). The main-sequence stars indicate a cluster age of 4 million years, which would be too young for red supergiants to be present already. A star formation event lasting a few million years might explain the Quintuplet's population, in which case the cluster would still be considered coeval. However, the unexpected simultaneous presence of red supergiants and Wolf-Rayet stars in the cluster indicates that the details of star formation and cluster evolution are not yet well understood for the Quintuplet cluster.
We study buckling instabilities of filaments in biological systems. Filaments in a cell are the building blocks of the cytoskeleton. They are responsible for the mechanical stability of cells and play an important role in intracellular transport by molecular motors, which transport cargo such as organelles along cytoskeletal filaments. Filaments of the cytoskeleton are semiflexible polymers, i.e., their bending energy is comparable to the thermal energy, so that they can be viewed as elastic rods on the nanometer scale that exhibit pronounced thermal fluctuations. Like macroscopic elastic rods, filaments can undergo a mechanical buckling instability under a compressive load. In the first part of the thesis, we study how this buckling instability is affected by the pronounced thermal fluctuations of the filaments. In cells, compressive loads on filaments can be generated by molecular motors; this happens, for example, during cell division in the mitotic spindle. In the second part of the thesis, we investigate how the stochastic nature of such motor-generated forces influences the buckling behavior of filaments. In chapter 2 we briefly review the buckling instability problem of rods on the macroscopic scale and introduce an analytical model for buckling of filaments or elastic rods in two spatial dimensions in the presence of thermal fluctuations. We present an analytical treatment of the buckling instability in the presence of thermal fluctuations based on a renormalization-like procedure in terms of the non-linear sigma model, in which we integrate out short-wavelength fluctuations in order to obtain an effective theory for the longest-wavelength mode governing the buckling instability. We calculate the resulting shift of the critical force due to fluctuation effects and find that, in two spatial dimensions, thermal fluctuations increase this force.
Furthermore, in the buckled state, thermal fluctuations lead to an increase in the mean projected length of the filament in the force direction. As a function of the contour length, the mean projected length exhibits a cusp at the buckling instability, which becomes rounded by thermal fluctuations. Our main result is thus that a buckled filament is stretched by thermal fluctuations. Our analytical results are confirmed by Monte Carlo simulations of buckling of semiflexible filaments in two spatial dimensions. We also perform Monte Carlo simulations in higher spatial dimensions and show that the increase in projected length by thermal fluctuations is less pronounced than in two dimensions and depends strongly on the choice of boundary conditions. In the second part of this work, we present a model for buckling of semiflexible filaments under the action of molecular motors. We investigate a system in which a group of motors moves along a clamped filament carrying a second filament as cargo. The cargo filament is pushed against a wall and eventually buckles. The force-generating motors can stochastically unbind from and rebind to the filament during the buckling process. We formulate a stochastic model of this system and calculate the mean first-passage time for the unbinding of all linking motors, which in a mean-field model corresponds to the transition back to the unbuckled state of the cargo filament. Our results show that for sufficiently short microtubules the movement of kinesin-1 motors is affected by the load force generated by the cargo filament. Our predictions could be tested in future experiments.
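The force scale at which motor-generated loads can buckle a filament can be illustrated with the classical Euler formula F_c = pi^2 kappa / L^2 together with the relation kappa = L_p k_B T between bending rigidity and persistence length. The numbers below (a microtubule persistence length of ~5 mm and a ~5 pN motor stall force) are typical literature values assumed for illustration, not results from the thesis:

```python
import math

KB_T = 4.1e-21                # J, thermal energy at room temperature
L_PERSISTENCE = 5e-3          # m, assumed microtubule persistence length (~5 mm)
KAPPA = L_PERSISTENCE * KB_T  # bending rigidity kappa = L_p * k_B T

def euler_critical_force(length):
    """Classical buckling force of a hinged-hinged rod: F_c = pi^2 kappa / L^2."""
    return math.pi ** 2 * KAPPA / length ** 2

stall_force = 5e-12           # N, assumed kinesin stall force (~5 pN)
for L in (2e-6, 5e-6, 10e-6):  # filament lengths of 2, 5 and 10 micrometers
    fc = euler_critical_force(L)
    print(f"L = {L*1e6:4.0f} um: F_c = {fc*1e12:6.2f} pN, "
          f"motors needed ~ {fc/stall_force:.1f}")
```

For micrometer-length filaments the critical force comes out at a few to a few tens of piconewtons, i.e. a small group of motors is indeed sufficient to drive the buckling studied in the second part.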
Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or represent relics of the formation process, and whether they behave similarly to those of the Sun or very differently, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides the possibility to observe surface topologies on distant stars indirectly, by means of the Doppler shift and the polarization of light, though not without challenges. Based on these effects, the so-called Zeeman-Doppler Imaging (ZDI) technique is a powerful method to retrieve magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions have been reconstructed by ZDI. However, implementations of this method often rely on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex and small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to make use of all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of the newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star.
To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis (PCA) and artificial neural networks; the latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as this one are potentially ill-posed and require regularization. We propose a new regularization scheme, which implements a local entropy function that accounts for the peculiarities of reconstructing localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on PCA. In contrast to other multi-line techniques, which extract a sort of mean profile from a large number of spectral lines, this method allows individual spectral lines to be extracted and thus permits an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach, as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so we validate the capability of our inversion code and also assess the limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived temperature and magnetic field surface distributions from spectropolarimetric data obtained in 2004 and 2007. This gives, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
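PCA-based multi-line denoising rests on the fact that many noisy line profiles share only a few significant degrees of freedom, so projecting onto the leading principal components suppresses the uncorrelated noise. A toy sketch of this idea on synthetic absorption profiles follows; the Gaussian line model, the noise level, and the number of retained components are illustrative assumptions, not the iMap implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_pix = 300, 80
x = np.linspace(-5, 5, n_pix)

# Synthetic "spectral lines": Gaussian absorption with varying depth and width
depth = rng.uniform(0.2, 0.8, n_lines)
width = rng.uniform(0.8, 1.2, n_lines)
clean = 1.0 - depth[:, None] * np.exp(-0.5 * (x[None, :] / width[:, None]) ** 2)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)

# PCA via SVD of the mean-subtracted profile matrix
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)

k = 3  # retain only the leading components
denoised = mean + U[:, :k] * s[:k] @ Vt[:k]

rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((denoised - clean) ** 2))
print(rms_before, rms_after)  # the low-rank projection removes most of the noise
```

Because the true profile variations span a low-dimensional subspace while the noise is spread over all components, the rank-k reconstruction keeps each individual line profile while cutting the residual error well below the input noise level.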
Over the past decades, classical semiconductor physics has continuously improved electronic components such as diodes, light-emitting diodes, solar cells and transistors based on highly purified inorganic crystals. Organic semiconductors, notably polymeric ones, are a comparatively young field of research, the first light-emitting diode based on conjugated polymers having been demonstrated in 1990. Polymeric semiconductors are of tremendous interest for high-volume, low-cost manufacturing ("printed electronics"). Because of their rather simple device structure, mostly comprising only one or two functional layers, polymeric diodes are much more difficult to optimize than small-molecule organic devices: functions such as charge injection and transport are usually handled by the same material, which thus needs to be highly optimized. The present work contributes to the knowledge of the physical mechanisms determining device performance by analyzing the role of charge injection and transport in the efficiency of blue- and white-emitting devices based on commercially relevant spiro-linked polyfluorene derivatives. It is shown that such polymers can act as very efficient electron conductors and that interface effects such as charge trapping play the key role in determining the overall device efficiency. This work contributes to the understanding of how charges drift through the polymer layer to finally find neutral emissive trap states, and thus allows a quantitative prediction of the emission color of multichromophoric systems, compatible with the observed color shifts upon variation of driving voltage and temperature as well as with electrical conditioning effects. In a more methodically oriented part, it is demonstrated that the transient device emission observed upon terminating the driving voltage can be used to monitor the decay of geminately bound species as well as to determine trapped charge densities.
This enables direct comparisons with numerical simulations based on the known properties of charge injection, transport and recombination. The method of charge extraction by linearly increasing voltage (CELIV) is investigated in some detail, correcting errors in the published approach and highlighting the role of the non-idealized conditions typically present in experiments. An improved method is suggested to determine the field dependence of the charge mobility more accurately. Finally, it is shown that the neglect of charge recombination has led to a misinterpretation of experimental results in terms of a time-dependent mobility relaxation.
This thesis investigates the Casimir effect between plates made of normal and superconducting metals over a broad range of temperatures, as well as the Casimir-Polder interaction of an atom with such a surface. Numerical and asymptotic calculations have been the main tools. The optical properties of the surfaces are described by dielectric functions or optical conductivities, which are reviewed for common models and analyzed with special attention to distributional properties and causality. The calculation of the Casimir energy between two normally conducting plates (cavity) is reviewed, and previous work on the contribution to the Casimir energy from the surface plasmons present in all metallic cavities has been generalized to finite temperatures for the first time. In the field of superconductivity, a new analytical continuation of the BCS conductivity to purely imaginary frequencies has been obtained, both inside and outside the extremely dirty limit of vanishing mean free path. The Casimir free energy calculated from this description was shown to coincide well with the values obtained from the two-fluid model of superconductivity in certain regimes of the material parameters. The Casimir entropy in a superconducting cavity fulfills the third law of thermodynamics and features a characteristic discontinuity at the phase transition temperature. These effects were equally encountered in the Casimir-Polder interaction of an atom with a superconducting wall. The magnetic dipole coupling of an atom to a metal was shown to be highly sensitive to dissipation and especially to the surface currents. This leads to a strong quenching of the magnetic Casimir-Polder energy at finite temperature. Violations of the third law of thermodynamics are encountered in special models, similar to the controversially debated phenomena in the Casimir effect between two plates. None of these effects occurs in the analogous electric dipole interaction.
The results of this work suggest reestablishing the well-known plasma model as the low-temperature limit of a superconductor, as in London theory, rather than using it for the description of normal metals. Superconductors offer the opportunity to control the dissipation of surface currents to a great extent. This could be used to access experimentally the low-frequency optical response of metals, which is strongly connected to the thermal Casimir effect. Here, differently from corresponding microwave experiments, energy and momentum are independent quantities. A measurement of the total Casimir-Polder interaction of atoms with superconductors seems to be within reach in today's microchip-based atom traps, and the contribution due to magnetic coupling might be accessed by spectroscopic techniques.
In this thesis, the properties of nonlinear disordered one-dimensional lattices are investigated. Part I gives an introduction to the phenomenon of Anderson localization, the Discrete Nonlinear Schroedinger Equation and its properties, as well as the generalization of this model by introducing the nonlinear index α. In Part II, the nonlinearity-induced spreading of initially localized states in large, disordered chains is studied. To this end, different methods to measure localization are discussed, and the structural entropy is introduced as a measure for the peak structure of probability distributions. Finally, the spreading exponent for several nonlinear indices is determined numerically and compared with analytical approximations. Part III deals with thermalization in short disordered chains. First, the term thermalization and its application to the system in use are explained. Then, results of numerical simulations on this topic are presented, with particular focus on the energy dependence of the thermalization properties. A connection with so-called breathers is drawn.
In normal everyday viewing, we perform large eye movements (saccades) and miniature, or fixational, eye movements. Most of our visual perception occurs while we are fixating; however, our eyes are perpetually in motion. Properties of these fixational eye movements, which are partly controlled by the brainstem, change depending on the task and the visual conditions. Fixational eye movements are currently poorly understood because they serve the two contradictory functions of gaze stabilization and counteraction of retinal fatigue. In this dissertation, we investigate the spatial and temporal properties of time series of eye position acquired from participants fixating a tiny dot or a completely dark screen (with the instruction to fixate a remembered stimulus); these time series were acquired with high spatial and temporal resolution. First, we suggest an advanced algorithm to separate the slow phases (drift) and fast phases (microsaccades) of these movements, which are considered to play different roles in perception. On the basis of this identification, we investigate and compare the temporal scaling properties of the complete time series and of those time series with the microsaccades removed. For the time series obtained during fixation on a stimulus, we show that they deviate from Brownian motion: on short time scales, eye movements are governed by persistent behavior, and on longer time scales by anti-persistent behavior. The crossover point between these two regimes remains unchanged by the removal of microsaccades but differs between the horizontal and vertical components of the eyes. Other analyses target the properties of the microsaccades, e.g., the rate and amplitude distributions, and we investigate whether microsaccades are triggered dynamically, as a result of earlier events in the drift, or completely randomly.
The results obtained from a simple box-count measure contradict the hypothesis of a purely random generation of microsaccades (Poisson process). Second, we set up a model for the slow part of the fixational eye movements. The model is based on a delayed random walk approach within the velocity-related equation, which allows us to use the data to determine control-loop durations; these durations appear to be different for the vertical and horizontal components of the eye movements. The model is also motivated by the known physiological representation of saccade generation; the difference between horizontal and vertical components is consistent with the spatially separated representation of the saccade-generating regions. Furthermore, the control-loop durations in the model suggest an external feedback loop for the horizontal but not for the vertical component, which is consistent with the fact that an internal feedback loop in the neurophysiology has only been identified for the vertical component. Finally, we confirmed the scaling properties of the model by semi-analytical calculations. In conclusion, we were able to identify several properties of the different parts of fixational eye movements and propose a model approach that is in accordance with the described neurophysiology and the known limitations of fixational eye movement control.
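The delayed-feedback idea behind such a drift model can be sketched as a minimal delayed random walk: the velocity receives white noise plus a corrective force that depends on the eye position a delay tau earlier. All parameter values below are illustrative, not the fitted control-loop durations from the dissertation; the point is only that delayed negative feedback produces diffusive growth on short time scales and saturation (anti-persistence) on long ones:

```python
import random

random.seed(1)

TAU = 10      # feedback delay in time steps (illustrative, not a fitted value)
GAMMA = 0.25  # velocity damping per step
EPS = 0.01    # strength of the delayed position feedback
SIGMA = 0.1   # noise amplitude
N = 20000

x = [0.0] * (TAU + 1)  # position history, needed for the delayed term
v = 0.0
for t in range(TAU, TAU + N):
    # velocity-related equation: damping, delayed restoring force, white noise
    v = (1.0 - GAMMA) * v - EPS * x[t - TAU] + random.gauss(0.0, SIGMA)
    x.append(x[t] + v)

def msd(series, lag):
    """Mean squared displacement at a given lag."""
    diffs = [(series[i + lag] - series[i]) ** 2 for i in range(len(series) - lag)]
    return sum(diffs) / len(diffs)

short, middle, long_ = msd(x, 10), msd(x, 200), msd(x, 2000)
print(short, middle, long_)  # growth at short lags, saturation at long lags
```

A free random walk would show the mean squared displacement growing linearly at all lags; with the delayed feedback it saturates beyond the control-loop time scale, mimicking the persistent-to-anti-persistent crossover described above.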
The Sun is a star that, owing to its proximity, has a tremendous influence on Earth. Since its earliest days, mankind has tried to "understand the Sun", and especially in the 20th century science uncovered many of the Sun's secrets through high-resolution observations and models. The Sun is an active star, and its activity, expressed in its magnetic cycle, is closely related to the sunspot numbers. Flares play a special role because they release large energies on very short time scales. They are correlated with enhanced electromagnetic emission across the entire spectrum. Furthermore, flares are sources of energetic particles. Hard X-ray observations (e.g., by NASA's RHESSI spacecraft) reveal that a large fraction of the energy released during a flare is transferred into the kinetic energy of electrons. However, the mechanism that accelerates a large number of electrons to high energies (beyond 20 keV) within fractions of a second is not yet understood. The thesis at hand presents a model for the generation of energetic electrons during flares that explains the electron acceleration based on parameters obtained from ground- and space-based observations. According to this model, photospheric plasma flows build up electric potentials in the active regions of the photosphere. Usually these electric potentials are associated with electric currents closed within the photosphere. However, as a result of magnetic reconnection, a magnetic connection between regions of different magnetic polarity on the photosphere can be established through the corona. Due to the significantly higher electric conductivity in the corona, the photospheric electric power supply can be closed via the corona. Subsequently a high electric current forms, which leads to the generation of hard X-ray radiation in the dense chromosphere. This idea is modelled and investigated by means of electric circuits.
For this purpose, the microscopic plasma parameters, the magnetic field geometry and hard X-ray observations are used to obtain parameters for modelling macroscopic electric components, such as electric resistors, which are connected with each other. This model demonstrates that such a coronal electric current is correlated with large-scale electric fields, which can quickly accelerate electrons up to relativistic energies. The results of these calculations are encouraging: the electron fluxes predicted by the model are in agreement with the electron fluxes deduced from the measured photon fluxes. Additionally, the model developed in this thesis proposes a new way to understand the observed double-footpoint hard X-ray sources.
Giant vesicles may contain several spatial compartments formed by phase separation within their enclosed aqueous solution. This phenomenon might be related to molecular crowding, fractionation and protein sorting in cells. To elucidate this process, we used two chemically dissimilar polymers, polyethylene glycol (PEG) and dextran, encapsulated in giant vesicles. The dynamics of the phase separation of this polymer solution enclosed in vesicles is studied by a concentration quench, i.e. by exposing the vesicles to hypertonic solutions. The excess membrane area produced by dehydration can either form tubular structures (also known as tethers) or be utilized for morphological changes of the vesicle, depending on the interfacial tension between the coexisting phases and the tensions between the membrane and the two phases. Membrane tube formation is coupled to the phase separation process: apparently, the energy released by the phase separation is utilized to overcome the energy barrier for tube formation. The tubes may be adsorbed at the interface to form a two-dimensional structure. The membrane stored in the form of tubes can be retracted under a small tension perturbation. Furthermore, a wetting transition, which has been reported in only a few experimental systems, was discovered in this system. With increasing polymer concentration, the PEG-rich phase changed from complete wetting to partial wetting of the membrane. If sufficient excess membrane area is available in a vesicle in which both phases wet the membrane, one of the phases will bud off from the vesicle body, which leads to the separation of the two phases. This wetting-induced budding is governed by the surface energy and modulated by the membrane tension, as demonstrated by micropipette aspiration experiments on vesicles encapsulating two phases. The budding of one phase can significantly decrease the surface energy by decreasing the contact area between the coexisting phases.
The elasticity of the membrane allows it to adjust its tension automatically to balance the pulling force exerted by the interfacial tension of the two liquid phases at the three-phase contact line. The budding of the phase enriched in one polymer may be relevant to selective protein transport between lumens by means of vesicles in cells.
Supernovae are known to be the dominant energy source for driving turbulence in the interstellar medium. Yet their effect on magnetic field amplification in spiral galaxies is still poorly understood. Analytical models based on the uncorrelated-ensemble approach predicted that any created field will be expelled from the disk before significant amplification can occur. By means of direct simulations of supernova-driven turbulence, we demonstrate that this is not the case. Accounting for vertical stratification and galactic differential rotation, we find an exponential amplification of the mean field on timescales of 100 Myr. The self-consistent numerical verification of such a "fast dynamo" is highly valuable for explaining the observed strong magnetic fields in young galaxies. We furthermore highlight the importance of rotation in the generation of helicity by showing that a similar mechanism based on Cartesian shear does not lead to a sustained amplification of the mean magnetic field. This finding strikingly confirms the classical picture of a dynamo based on cyclonic turbulence.
The aim of this thesis is to achieve a deep understanding of the working mechanism of polymer-based solar cells and to improve device performance. Two types of polymer-based solar cells are studied here: all-polymer solar cells comprising macromolecular donors and acceptors based on poly(p-phenylene vinylene) (PPV), and hybrid cells comprising a PPV copolymer in combination with a novel small-molecule electron acceptor. To understand the interplay between morphology and photovoltaic properties in all-polymer devices, I compared the photocurrent characteristics and excited-state properties of bilayer and blend devices with different nano-morphologies, fine-tuned by using solvents with different boiling points. The main conclusion from these complementary measurements was that the performance-limiting step is the field-dependent generation of free charge carriers, while bimolecular recombination and charge extraction do not compromise device performance. These findings imply that the proper design of the donor-acceptor heterojunction is of major importance on the way to high photovoltaic efficiencies. Regarding polymer/small-molecule hybrid solar cells, I combined the hole-transporting polymer M3EH-PPV with a novel Vinazene-based electron acceptor. This molecule can be deposited either from solution or by thermal evaporation, allowing a large variety of layer architectures to be realized. I then demonstrated that the layer architecture has a large influence on the photovoltaic properties. Solar cells with very high fill factors of up to 57% and an open-circuit voltage of 1 V could be achieved by realizing a sharp and well-defined donor-acceptor heterojunction. In the past, fill factors exceeding 50% had only been observed for polymers in combination with soluble fullerene derivatives or nanocrystalline inorganic semiconductors as the electron-accepting component.
The finding that proper processing of polymer-Vinazene devices leads to similarly high values is a major step towards the design of efficient polymer-based solar cells.
Microfabricated solid-state surfaces, also called 'atom chips', have become a well-established technique to trap and manipulate atoms. This has simplified applications in atom interferometry, quantum information processing, and studies of many-body systems. Magnetic trapping potentials with arbitrary geometries are generated on an atom chip by miniaturized current-carrying conductors integrated on a solid substrate. Atoms can be trapped and cooled to microkelvin and even nanokelvin temperatures in such microchip traps. However, cold atoms can be significantly perturbed by the chip surface, which is typically held at room temperature. The magnetic field fluctuations generated by thermal currents in the chip elements may induce spin flips of atoms and result in loss, heating and decoherence. In this thesis, we extend previous work on spin flip rates induced by magnetic noise and consider the more complex geometries that are typically encountered in atom chips: layered structures and metallic wires of finite cross-section. We also discuss a few aspects of atom chip traps built with superconducting structures, which have been suggested as a means to suppress magnetic field fluctuations. The thesis describes calculations of spin flip rates based on magnetic Green functions that are computed analytically and numerically. For a chip with a top metallic layer, the magnetic noise depends essentially on the thickness of that layer, as long as the layers below have a much smaller conductivity. Based on this result, scaling laws for loss rates above a thin metallic layer are derived. Good agreement with experiments is obtained in the regime where the atom-surface distance is comparable to the skin depth of the metal. Since in experiments metallic layers are always etched to separate wires carrying different currents, the impact of the finite lateral wire size on the magnetic noise has also been taken into account.
The local spectrum of the magnetic field near a metallic microstructure has been investigated numerically with the help of boundary integral equations. Above flat wires of finite lateral width, the magnetic noise depends significantly on polarization, in stark contrast to the case of an infinitely wide wire. Correlations between multiple wires are also taken into account. In the last part, superconducting atom chips are considered. Magnetic traps generated by superconducting wires in the Meissner state and the mixed state are studied analytically by a conformal mapping method and also numerically. The properties of the traps created by superconducting wires are investigated and compared to normal conducting wires: they behave qualitatively similarly and, thanks to their low magnetic noise, open a route to further trap miniaturization. We discuss critical currents and fields for several geometries.
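The crossover regime mentioned above, where the atom-surface distance is comparable to the skin depth, is easy to quantify; a minimal sketch, where the material value and Larmor frequency are illustrative assumptions rather than numbers from the thesis:

```python
import math

MU_0 = 4.0e-7 * math.pi  # vacuum permeability in T*m/A

def skin_depth(conductivity, frequency):
    """Skin depth delta = sqrt(2 / (mu_0 * sigma * omega))
    of a metal at angular frequency omega = 2*pi*frequency."""
    omega = 2.0 * math.pi * frequency
    return math.sqrt(2.0 / (MU_0 * conductivity * omega))

# Illustrative values: gold at room temperature and a spin-flip
# (Larmor) frequency of ~1 MHz, typical for chip traps.
sigma_gold = 4.5e7  # S/m, approximate
f_larmor = 1.0e6    # Hz

delta = skin_depth(sigma_gold, f_larmor)
print(f"skin depth: {delta * 1e6:.0f} um")
```

For these assumed values the skin depth comes out at tens of micrometers, the same scale as typical atom-surface distances on a chip.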
Recently, several faint ringlets in the Saturnian ring system were found to maintain a peculiar orientation relative to the Sun. The Encke gap ringlets as well as the ringlet in the outer rift of the Cassini division were found to have distinct spatial displacements of several tens of kilometers away from Saturn towards the Sun, referred to as heliotropicity (Hedman et al., 2007). This is quite exceptional, since dynamically one would expect eccentric features in the Saturnian rings to precess around Saturn over periods of months. In our study we address this exceptional behavior by investigating the dynamics of circumplanetary dust particles with sizes in the range of 1-100 µm. These small particles are perturbed by non-gravitational forces, in particular solar radiation pressure, the Lorentz force, and planetary oblateness, on time-scales of the order of days. The combined influence of these forces causes periodic evolution of the grains' orbital eccentricities as well as precession of their pericenters, which can be shown by secular perturbation theory. We show that this interaction results in a stationary eccentric ringlet, oriented with its apocenter towards the Sun, which is consistent with observational findings. By applying this heliotropic dynamics to the central Encke gap ringlet, we can give a limit for the expected smallest grain size in the ringlet of about 8.7 microns, and constrain the minimal lifetime to be of the order of months. Furthermore, our model matches the observed ringlet eccentricity in the Encke gap fairly well, which supports recent estimates of the size distribution of the ringlet material (Hedman et al., 2007). The ringlet width that results from our modeling based on heliotropic dynamics, however, overestimates the observed confined ringlet width by a factor of 3 to 10, depending on the width measure being used.
This is indicative of mechanisms not included in the heliotropic model which potentially confine the ringlet to its observed width, including shepherding and scattering by moonlets embedded in the ringlet region. Based on these results, early investigations (Cuzzi et al., 1984, Spahn and Wiebicke, 1989, Spahn and Sponholz, 1989), and recent work on the F ring (Murray et al., 2008), with which the Encke gap ringlets share similar morphological structures, we model the maintenance of the central ringlet by embedded moonlets. These moonlets, believed to be hundreds of meters across, release material into space as they are eroded by micrometeoroid bombardment (Divine, 1993). We further argue that Pan - one of Saturn's moons, which shares its orbit with the central ringlet of the Encke gap - is a rather weak source of ringlet material, but efficiently confines the ringlet sources (moonlets) to move on horseshoe-like orbits. Moreover, we suppose that most of the narrow heliotropic ringlets are fed by a moonlet population whose members are confined to horseshoe-like orbits by the largest member. Modeling the equilibrium between particle sources and sinks with a primitive balance equation based on photometric observations (Porco et al., 2005), we find that a minimal effective source mass of the order of 3 · 10^-2 M_Pan is needed to keep the central ringlet from disappearing.
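The idea behind a "primitive balance equation" between moonlet sources and particle sinks can be illustrated with a toy steady-state model: assuming a constant source rate S and a linear loss with lifetime tau, the ringlet mass relaxes to M* = S·tau. All numbers below are hypothetical, chosen only to match the order-of-magnitude lifetime quoted above:

```python
import math

def ringlet_mass(t, source_rate, lifetime, m0=0.0):
    """Analytic solution of the balance equation dM/dt = S - M/tau:
    M(t) = S*tau + (M0 - S*tau) * exp(-t/tau)."""
    m_eq = source_rate * lifetime
    return m_eq + (m0 - m_eq) * math.exp(-t / lifetime)

# Hypothetical numbers for illustration only.
S = 1.0e3    # source rate in kg/s
tau = 1.0e7  # particle lifetime in s (order of months)

# After a few lifetimes the ringlet mass saturates at M* = S * tau.
m = ringlet_mass(5.0 * tau, S, tau)
print(f"mass after 5 lifetimes: {m:.3e} kg (equilibrium {S * tau:.1e} kg)")
```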
Proteins are chain molecules built from amino acids. The precise sequence of the 20 different types of amino acids in a protein chain defines into which structure a protein folds, and the three-dimensional structure in turn specifies the biological function of the protein. The reliable folding of proteins is a prerequisite for their robust function. Misfolding can lead to protein aggregates that cause severe diseases, such as Alzheimer's, Parkinson's, or the variant Creutzfeldt-Jakob disease. Small single-domain proteins often fold without experimentally detectable metastable intermediate states. The folding dynamics of these proteins is thought to be governed by a single transition-state barrier between the unfolded and the folded state. The transition state is highly unstable and cannot be observed directly. However, mutations in which a single amino acid of the protein is substituted by another one can provide indirect access. The mutations slightly change the transition-state barrier and, thus, the folding and unfolding times of the protein. The central question is how to reconstruct the transition state from the observed changes in folding times. In this habilitation thesis, a novel method to extract structural information on transition states from mutational data is presented. The method is based on (i) the cooperativity of structural elements such as alpha-helices and beta-hairpins, and (ii) on splitting up mutation-induced free-energy changes into components for these elements. By fitting few parameters, the method reveals the degree of structure formation of alpha-helices and beta-hairpins in the transition state. In addition, it is shown in this thesis that the folding routes of small single-domain proteins are dominated by loop-closure dependencies between the structural elements.
This thesis describes the analysis of observations of two sunspots in two-dimensional spectro-polarimetry. The data were acquired with the Fabry-Pérot interferometer of the University of Göttingen at the Vacuum Tower Telescope on Tenerife. For the active region NOAA 9516, the full Stokes vector of the polarized light was observed in single exposures in the absorption line at 630.249 nm, and for the active region NOAA 9036 a 90-minute time series of the circularly polarized light was recorded at a wavelength of 617.3 nm. From the reduced data, values for intensity, line-of-sight velocity, magnetic field strength and various other plasma parameters are derived. Several approaches to the inversion of solar model atmospheres are applied and compared. The partly substantial error influences are discussed in detail. The frequency behaviour of the results and their dependence on position and time are analysed further by means of Fourier and wavelet transforms. As a result, the existence of a high-frequency band of velocity oscillations with a central period of 75 seconds (13 mHz) is confirmed. At greater photospheric heights of about 500 km, the majority of the associated shock waves originate in the dark parts of the granules, in contrast to other frequency ranges. The 75-second oscillations are also observed in the active region, above all in the light bridge. In the identified bands of oscillatory velocity power, pronounced structures are discernible in a dark penumbral feature as well as in the light bridge, moving into the quiet Sun with a horizontal velocity of 5-8 km/s. These show a marked increase in power, especially in the 5-minute band, and are possibly related to the phenomenon of "Evershed clouds".
Limited by a very low signal-to-noise ratio and large error influences, magnetic field variations with a period of six minutes are also observed at the transition from umbra to penumbra near a light bridge. To obtain the results described, existing visualization methods for frequency analysis were improved or newly developed, in particular for results of the wavelet transform.
This thesis describes two main projects; the first one is the optimization of a hierarchical search strategy for unknown pulsars. This project is divided into two parts; the first part (and the main part) is the semi-coherent hierarchical optimization strategy. The second part is a coherent hierarchical optimization strategy which can be used in a project like Einstein@Home. In both strategies we have found that the three-stage search is the optimum strategy to search for unknown pulsars. For the second project we have developed software for a coherent Multi-IFO (Interferometer Observatory) search. To validate our software, we have worked on simulated data as well as hardware-injected pulsar signals in the fourth LIGO science run (S4). While with the current sensitivity of our detectors we do not expect to detect any true gravitational wave signals in our data, we can still set upper limits on the strength of the gravitational wave signals. These upper limits tell us, in effect, the weakest signal strength we would have been able to detect. We have also used our software to set upper limits on the signal strength of known isolated pulsars using LIGO fifth science run (S5) data.
The intergalactic medium is kept highly photoionised by the intergalactic UV background radiation field generated by the overall population of quasars and galaxies. In the vicinity of sources of UV photons, such as luminous high-redshift quasars, the UV radiation field is enhanced due to the local source contribution. The higher degree of ionisation is visible as a reduced line density or, more generally, as a decreased level of absorption in the Lyman alpha forest of neutral hydrogen. This so-called proximity effect has been detected with high statistical significance towards luminous quasars. If quasars radiate rather isotropically, background quasar sightlines located near foreground quasars should show a region of decreased Lyman alpha absorption close to the foreground quasar. Despite considerable effort, such a transverse proximity effect has only been detected in a few cases. So far, studies of the transverse proximity effect have mostly been limited by the small number of suitable projected pairs or groups of high-redshift quasars. With the aim of substantially increasing the number of quasar groups in the vicinity of bright quasars, we conduct a targeted survey for faint quasars around 18 well-studied quasars employing slitless spectroscopy. Among the reduced and calibrated slitless spectra of 29000 objects on a total area of 4.39 square degrees we discover in total 169 previously unknown quasar candidates based on their prominent emission lines. 81 potential z>1.7 quasars are selected for confirmation by slit spectroscopy at the Very Large Telescope (VLT). We are able to confirm 80 of these; 64 of the newly discovered quasars reside at z>1.7. The high success rate of the follow-up observations implies that the majority of the remaining candidates are quasars as well. In 16 of these groups we search for a transverse proximity effect as a systematic underdensity in the HI Lyman alpha absorption.
We employ a novel technique to characterise the random absorption fluctuations in the forest in order to estimate the significance of the transverse proximity effect. Neither low-resolution nor high-resolution spectra of the background quasars of our groups show evidence for a transverse proximity effect. However, Monte Carlo simulations indicate that the effect should be detectable only at the 1-2 sigma level near three of the foreground quasars. Thus, we cannot distinguish between the presence or absence of a weak signature of the transverse proximity effect. The systematic effects of quasar variability, quasar anisotropy and intrinsic overdensities near quasars likely explain the apparent lack of the transverse proximity effect. Even in the absence of these systematic effects, we show that a statistically significant detection of the transverse proximity effect requires at least 5 medium-resolution spectra of background quasars near foreground quasars whose UV flux exceeds the UV background by a factor of 3. Therefore, statistical studies of the transverse proximity effect require large numbers of suitable pairs. Two sightlines towards the central quasars of our survey fields show intergalactic HeII Lyman alpha absorption. A comparison of the HeII absorption to the corresponding HI absorption yields an estimate of the spectral shape of the intergalactic UV radiation field, typically parameterised by the HeII/HI column density ratio eta. We analyse the fluctuating UV spectral shape on both lines of sight and correlate it with seven foreground quasars. On the line of sight towards Q0302-003 we find a harder radiation field near 4 foreground quasars. In the direct vicinity of the quasars eta is consistent with values of 25-100, whereas at large distances from the quasars eta>200 is required. The second line of sight towards HE2347-4342 probes lower redshifts where eta is directly measurable in the resolved HeII forest.
Again we find that the radiation field near the 3 foreground quasars is significantly harder than in general. While eta still shows large fluctuations near the quasars, probably due to radiative transfer, the radiation field is on average harder near the quasars than far away from them. We interpret these discoveries as the first detections of the transverse proximity effect as a local hardness fluctuation in the UV spectral shape. No significant HI proximity effect is predicted for the 7 foreground quasars. In fact, the HI absorption near the quasars is close to or slightly above the average, suggesting that the weak signature of the transverse proximity effect is masked by intrinsic overdensities. However, we show that the UV spectral shape traces the transverse proximity effect even in overdense regions or at large distances. Therefore, the spectral hardness is a sensitive physical measure of the transverse proximity effect that is able to break the density degeneracy affecting the traditional searches.
In biological cells, the long-range intracellular traffic is powered by molecular motors which transport various cargos along microtubule filaments. The microtubules possess an intrinsic direction, having a 'plus' and a 'minus' end. Some molecular motors such as cytoplasmic dynein walk to the minus end, while others such as conventional kinesin walk to the plus end. Cells typically have an isopolar microtubule network. This is most pronounced in neuronal axons or fungal hyphae. In these long and thin tubular protrusions, the microtubules are arranged parallel to the tube axis with the minus ends pointing to the cell body and the plus ends pointing to the tip. In such a tubular compartment, transport by only one motor type leads to 'motor traffic jams'. Kinesin-driven cargos accumulate at the tip, while dynein-driven cargos accumulate near the cell body. We identify the relevant length scales and characterize the jamming behaviour in these tube geometries by using both Monte Carlo simulations and analytical calculations. A possible solution to this jamming problem is to transport cargos with a team of plus and a team of minus motors simultaneously, so that they can travel bidirectionally, as observed in cells. The presumably simplest mechanism for such bidirectional transport is provided by a 'tug-of-war' between the two motor teams which is governed by mechanical motor interactions only. We develop a stochastic tug-of-war model and study it with numerical and analytical calculations. We find a surprisingly complex cooperative motility behaviour. We compare our results to the available experimental data, which we reproduce qualitatively and quantitatively.
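A heavily simplified caricature of such a tug-of-war can be sketched as a stochastic simulation. This is not the model developed in the thesis, and all rates below are hypothetical: each motor of the plus and minus team binds and unbinds at random, and the cargo velocity follows the imbalance of bound motors.

```python
import random

def simulate_tug_of_war(steps=10000, n_plus=3, n_minus=3,
                        p_bind=0.3, p_unbind=0.1, v0=1.0, seed=1):
    """Toy tug-of-war: each motor of the plus and minus team binds and
    unbinds stochastically per time step; the cargo moves with velocity
    v0 * (bound_plus - bound_minus). Returns the cargo trajectory."""
    rng = random.Random(seed)
    bound_p = bound_m = 0
    x, traj = 0.0, []
    for _ in range(steps):
        # binding attempts for unbound motors, unbinding for bound ones
        bound_p += sum(rng.random() < p_bind for _ in range(n_plus - bound_p))
        bound_p -= sum(rng.random() < p_unbind for _ in range(bound_p))
        bound_m += sum(rng.random() < p_bind for _ in range(n_minus - bound_m))
        bound_m -= sum(rng.random() < p_unbind for _ in range(bound_m))
        x += v0 * (bound_p - bound_m)
        traj.append(x)
    return traj

traj = simulate_tug_of_war()
print("final cargo position:", traj[-1])
```

With symmetric teams the trajectory switches direction at random; biasing team sizes or rates biases the net transport, which is the qualitative point of the mechanism.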
In the present dissertation an approach is developed which ensures efficient control of such diverse systems as noisy or chaotic oscillators and neural ensembles. This approach is implemented by a simple linear feedback loop. The dissertation consists of two main parts. One part of the work is dedicated to applying the suggested technique to a population of neurons with the goal of suppressing their synchronous collective dynamics. The other part investigates linear feedback control of the coherence of a noisy or chaotic self-sustained oscillator. First, we start with the problem of suppressing synchronization in a large population of interacting neurons. The importance of this task is based on the hypothesis that the emergence of pathological brain activity in Parkinson's disease and other neurological disorders is caused by the synchrony of many thousands of neurons. The established therapy for patients with such disorders is permanent high-frequency electrical stimulation via depth microelectrodes, called Deep Brain Stimulation (DBS). In spite of the efficiency of such stimulation, it has several side effects, and the mechanisms underlying DBS remain unclear. In the present work an efficient and simple control technique is suggested. It is designed to ensure suppression of synchrony in a neural ensemble by a minimized stimulation that vanishes as soon as the tremor is suppressed. This vanishing-stimulation technique would be a useful tool for experimental neuroscience; on the other hand, control of the collective dynamics in a large population of units represents an interesting physical problem. The main idea of the suggested approach is related to a classical problem of oscillation theory, namely the interaction between a self-sustained (active) oscillator and a passive load (resonator). It is known that under certain conditions the passive oscillator can suppress the oscillations of the active one.
In this thesis a much more complicated case is considered: an active medium which itself consists of thousands of oscillators. By coupling this medium to a specially designed passive oscillator, one can control the collective motion of the ensemble, specifically enhance or suppress it. Having in mind a possible application in neuroscience, we concentrate on the problem of suppression. Second, the efficiency of the suggested suppression scheme is illustrated for a more complex case, i.e. when the population of neurons generating the undesired rhythm consists of two non-overlapping subpopulations: the first one is affected by the stimulation, while the collective activity is registered from the second one. Generally speaking, the second population may itself be either active or passive; both cases are considered here. Possible applications of the suggested technique are discussed. Third, the influence of external linear feedback on the coherence of a noisy or chaotic self-sustained oscillator is considered. Coherence is one of the main properties of self-oscillating systems and plays a key role in the construction of clocks, electronic generators, lasers, etc. The coherence of a noisy limit cycle oscillator is evaluated, in the context of phase dynamics, by the phase diffusion constant, which is in turn proportional to the width of the spectral peak of the oscillations. Many chaotic oscillators can be described within the framework of phase dynamics, and therefore their coherence can also be quantified by the phase diffusion constant. An analytical theory for general linear feedback, treating noisy systems in the linear and Gaussian approximation, is developed and validated by numerical results.
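The phase diffusion constant used here as a coherence measure can be estimated numerically for the simplest case, a noisy phase oscillator dphi = omega dt + sqrt(2D) dW, for which Var[phi(t)] = 2Dt. A sketch with illustrative parameters:

```python
import math
import random

def estimate_phase_diffusion(D=0.05, omega=1.0, dt=0.01,
                             steps=2000, trials=400, seed=0):
    """Simulate dphi = omega*dt + sqrt(2*D*dt)*xi over many realizations
    and estimate D from the ensemble variance, Var[phi(t)] = 2*D*t."""
    rng = random.Random(seed)
    t_final = steps * dt
    finals = []
    for _ in range(trials):
        phi = 0.0
        for _ in range(steps):
            phi += omega * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        finals.append(phi)
    mean = sum(finals) / trials
    var = sum((p - mean) ** 2 for p in finals) / (trials - 1)
    return var / (2.0 * t_final)

est = estimate_phase_diffusion()
print(f"estimated D: {est:.3f} (input value 0.05)")
```

The drift omega cancels out of the variance, so only the noise term contributes; a feedback loop that narrows the spectral peak would show up here as a reduced estimate of D.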
The mammalian brain is, with its numerous neural elements and structured complex connectivity, one of the most complex systems in nature. Recently, large-scale corticocortical connectivities, both structural and functional, have received a great deal of research attention, especially using the approach of complex networks. Here, we try to shed some light on the relationship between structural and functional connectivities by studying synchronization dynamics in a realistic anatomical network of cat cortical connectivity. We model the cortical areas by a subnetwork of interacting excitable neurons (multilevel model) and by a neural mass model (population model). With weak couplings, the multilevel model displays biologically plausible dynamics and the synchronization patterns reveal a hierarchical cluster organization in the network structure. We can identify a group of brain areas involved in multifunctional tasks by comparing the dynamical clusters to the topological communities of the network. With strong couplings in the multilevel model, and with the neural mass model, the dynamics are characterized by well-defined oscillations. The synchronization patterns are mainly determined by the node intensity (the total input strength of a node); the detailed network topology is of secondary importance. The biologically improved multilevel model exhibits similar dynamical patterns in the two regimes. Thus, the study of synchronization in a multilevel complex network model of the cortex can provide insights into the relationship between network topology and the functional organization of complex brain networks.
Water vapour in the stratosphere and troposphere is one of the most important atmospheric greenhouse gases. Besides its relevance for the climate, it strongly influences the formation of polar stratospheric clouds as well as atmospheric chemistry. Within a German research consortium, a powerful, mobile, scanning water-vapour DIAL for three-dimensionally highly resolved measurements of atmospheric water vapour is to be developed for the first time worldwide. With the water-vapour DIAL, water vapour concentrations in the atmosphere can be measured with high temporal and spatial resolution. The DIAL is based on a titanium-sapphire laser or, as an alternative, an OPO (optical parametric oscillator) laser. The pump laser required for optically pumping these lasers was developed within this work in the Nonlinear Optics group of the Institute of Physics at the University of Potsdam. A high-resolution, mobile DIAL requires a pump laser with large pulse energies, good beam quality and high efficiency. To achieve these goals, a MOPA (Master Oscillator Power Amplifier) system with frequency stabilization, based on birefringence-compensated, transversely diode-pumped laser rods, was developed and investigated. Along the way, different options for realizing the MOPA system were examined. In this context, the solid-state laser materials Yb:YAG [1], core-doped Nd:YAG ceramic [2] and conventional Nd:YAG were presented and examined with respect to their suitability for this MOPA system. After Nd:YAG had been chosen as the laser-active material, the laser system was designed on the basis of gain calculations. The gain calculation developed here accounts for the conditions of real systems by taking into account radius-dependent intensities and a radially non-homogeneous inversion density.
The frequency stabilization of the pulsed oscillator (frequency stability of 1 MHz) was achieved using the Pound-Drever-Hall technique. The frequency stability of the oscillator is measured with the heterodyne method. After investigating various configurations of linear and ring oscillators, a ring oscillator with two laser heads was built, into which light from an external fixed-frequency laser is injected. It emits a pulse energy of Eout = 21 mJ at a repetition rate of 400 Hz with nearly diffraction-limited beam quality (M² < 1.2). These laser pulses were amplified first by a preamplifier stage and then by two birefringence-compensated main amplifiers in a double pass. A good beam quality (M² = 1.75) was achieved, among other measures, by realizing the double pass through the main amplifiers with a phase-conjugating mirror (SF6) based on stimulated Brillouin scattering. The developed laser emits pulses with a length of 25 ns and an energy of 250 mJ. Overall, a laser system unique to date has been developed: the achieved combination of frequency stability, beam quality and power has not previously been documented in the literature. In the future, the pulse energy of the system is to be increased further by using core-doped ceramic laser materials, higher pump powers in the main amplifiers and phase-conjugating mirrors made of quartz. [1] M. Ostermeyer, A. Straesser, "Theoretical investigation of Yb:YAG as laser material for nanosecond pulse emission with large energies in the joule range", Optics Communications, Vol. 274, pp. 422-428 (2007) [2] A. Sträßer and M. Ostermeyer, "Improving the brightness of side pumped power amplifiers by using core doped ceramic rods", Optics Express, Vol. 14, pp. 6687-6693 (2006)
The interaction between neuronal cells can be identified as the computing mechanism of the brain. Neurons are complex cells that do not operate in isolation; they are organized in a highly connected network structure. There is experimental evidence that groups of neurons dynamically synchronize their activity and process brain functions at all levels of complexity. A fundamental step to prove this hypothesis is to analyze large sets of single neurons recorded in parallel. Techniques to obtain these data are now available, but advancements are needed in the pre-processing of the large volumes of acquired data and in data analysis techniques. Major issues include extracting the signal of single neurons from the noisy recordings (referred to as spike sorting) and assessing the significance of the synchrony. This dissertation addresses these issues with two complementary strategies, both founded on the manipulation of point processes under rigorous analytical control. On the one hand, I modeled the effect of spike sorting errors on correlated spike trains by corrupting them with realistic failures, and studied the corresponding impact on correlation analysis. The results show that correlations between multiple parallel spike trains are severely affected by spike sorting, especially by erroneously missing spikes. When this happens, sorting strategies characterized by classifying only 'good' spikes (conservative strategies) lead to less accurate results than 'tolerant' strategies. On the other hand, I investigated the effectiveness of methods for assessing significance that create surrogate data by displacing spikes around their original position (referred to as dithering). I provide analytical expressions for the probability of coincidence detection after dithering. The effectiveness of spike dithering in creating surrogate data strongly depends on the dithering method and on the method of counting coincidences.
Closed-form expressions and bounds are derived for the case where the dither equals the allowed coincidence interval. This work provides new insights into the methodologies of identifying synchrony in large-scale neuronal recordings, and of assessing its significance.
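The effect of dithering on coincidence counts can be illustrated with a toy experiment (not the analytical treatment of the dissertation; all parameters are hypothetical): inject shared spikes into two Poisson trains, then compare coincidence counts before and after uniformly dithering one train.

```python
import random

def poisson_train(rate, duration, rng):
    """Homogeneous Poisson spike train as a sorted list of spike times."""
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > duration:
            return spikes
        spikes.append(t)

def count_coincidences(train_a, train_b, window):
    """Spikes in train_a with at least one spike of train_b
    within +/- window (a simple coincidence count)."""
    return sum(any(abs(a - b) <= window for b in train_b) for a in train_a)

rng = random.Random(42)
duration, window, dither = 100.0, 0.005, 0.05  # seconds, illustrative
shared = poisson_train(10.0, duration, rng)  # injected coincident spikes
a = sorted(shared + poisson_train(10.0, duration, rng))
b = sorted(shared + poisson_train(10.0, duration, rng))
# Surrogate: displace every spike of b uniformly within +/- dither.
b_dithered = sorted(t + rng.uniform(-dither, dither) for t in b)
print(count_coincidences(a, b, window),
      count_coincidences(a, b_dithered, window))
```

Since the dither range is much larger than the coincidence window, the surrogate destroys most of the injected coincidences while preserving the firing rates, which is exactly what makes it usable as a null model.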
Atmospheric circulation and the surface mass balance in a regional climate model of Antarctica
(2007)
Understanding the Earth's climate system and particularly climate variability presents one of the most difficult and urgent challenges in science. The Antarctic plays a crucial role in the global climate system, since it is the principal region of radiative energy deficit and atmospheric cooling. An assessment of the regional climate model HIRHAM is presented. The simulations are generated with the HIRHAM model, which has been modified for Antarctic applications. With a horizontal resolution of 55 km, the model has been run for the period 1958-1998, creating long-term simulations from initial and boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA40 re-analysis. The model output is compared with observations from surface stations, upper-air data, global atmospheric analyses and satellite data. The evaluation shows that the simulations with the HIRHAM model capture both the large-scale and regional-scale circulation features, with generally small biases in the modeled variables. On the annual time scale the largest errors in the model simulations are the overestimation of total cloud cover and a cold bias in near-surface temperature over the interior of the Antarctic plateau. The low-level temperature inversion as well as the low-level wind jet are well captured by the model. Decadal-scale processes were studied based on trend calculations. The long-term run was divided into two 20-year parts. The 2 m temperature, 500 hPa temperature, MSLP, precipitation and net mass balance trends were calculated for both periods and over 1958-1998. During the last two decades strong surface cooling was observed over East Antarctica; this result is in good agreement with Chapman and Walsh (2005), who calculated the temperature trend based on observational data. The MSLP trend reveals a large disparity between the first and second parts of the 40-year run.
The overall trend shows a strengthening of the circumpolar vortex and of the continental anticyclone. The net mass balance as well as the precipitation show a positive trend over the Antarctic Peninsula region, along Wilkes Land and in Dronning Maud Land. The Antarctic ice sheet grows over the eastern part of Antarctica, with small exceptions in Dronning Maud Land and Wilkes Land, and shrinks over the Antarctic Peninsula; this result is in good agreement with the satellite-measured altitudes presented in Davis (2005). To better understand the horizontal structure of the MSLP, temperature and net mass balance trends, the influence of the Southern Annular Mode (SAM) on the Antarctic climate was investigated. The main meteorological parameters during the positive and negative Antarctic Oscillation (AAO) phases were compared to each other. A positive/negative AAO index means strengthening/weakening of the circumpolar vortex, poleward/northward storm tracks and prevailing/weakening westerly winds. For a detailed investigation of global teleconnections, two positive periods and one negative period of the AAO phase were chosen. The differences in MSLP and 2 m temperature between positive and negative AAO years during the winter months partly explain the surface cooling during the last decades.
Stellar winds play an important role in the evolution of massive stars and their cosmic environment. Multiple lines of evidence, coming from spectroscopy, polarimetry, variability, stellar ejecta, and hydrodynamic modeling, suggest that stellar winds are non-stationary and inhomogeneous. This is referred to as 'wind clumping'. The urgent need to understand this phenomenon is boosted by its far-reaching implications. Most importantly, all techniques to derive empirical mass-loss rates are more or less corrupted by wind clumping. Consequently, mass-loss rates are extremely uncertain. Within their range of uncertainty, completely different scenarios for the evolution of massive stars are obtained. Settling these questions for Galactic OB, LBV and Wolf-Rayet stars is a prerequisite to understanding stellar clusters and galaxies, or predicting the properties of first-generation stars. In order to develop a consistent picture and understanding of clumped stellar winds, an international workshop on 'Clumping in Hot Star Winds' was held in Potsdam, Germany, from 18 to 22 June 2007. About 60 participants, comprising almost all leading experts in the field, gathered for one week of extensive exchange and discussion. The Scientific Organizing Committee (SOC) included John Brown (Glasgow), Joseph Cassinelli (Madison), Paul Crowther (Sheffield), Alex Fullerton (Baltimore), Wolf-Rainer Hamann (Potsdam, chair), Anthony Moffat (Montreal), Stan Owocki (Newark), and Joachim Puls (Munich). These proceedings contain the invited and contributed talks presented at the workshop, and document the extensive discussions.
Electron transfer phenomena in proteins represent one of the most common types of biochemical reactions. They play a central role in the energy conversion pathways of living cells and are crucial components of respiration and photosynthesis. These complex biochemical reaction cascades consist of a series of proteins and protein complexes that couple a charge transfer to different forms of chemical energy. The efficiency and sophisticated optimisation of signal transfer in these natural redox chains has inspired the engineering of artificial architectures mimicking essential properties of their natural analogues. The implementation of direct electron transfer (DET) in protein assemblies was a breakthrough in bioelectronics, providing a simple and efficient way of coupling biological recognition events to a signal transducer. DET avoids the use of redox mediators, reducing potential interferences and side reactions, and is more compatible with in vivo conditions. However, only a few haem proteins, including the redox protein cytochrome c (cyt.c), and blue copper enzymes show efficient DET on different kinds of electrodes. Previous investigations with cyt.c have mainly focused on the heterogeneous electron transfer of monolayers of this protein on gold. An important advance was the fabrication of cyt.c multilayers by electrostatic layer-by-layer self-assembly. The ease of fabrication, the stability, and the controllable permeability of polyelectrolyte multilayers have made them particularly attractive for electroanalytical applications. With cyt.c and sulfonated polyaniline, fully electro-active multilayers of the redox protein were prepared for the first time. This approach was extended to design an analytical signal chain based on multilayers of cyt.c and xanthine oxidase (XOD).
The system does not need an external mediator but relies on the in situ generation of a mediating radical and thus allows a signal transfer from hypoxanthine via the substrate-converting enzyme and cyt.c to the electrode. Another kind of signal chain is based on assembling proteins in complexes on electrodes in such a way that a direct protein-protein electron transfer becomes feasible. In analogy to natural protein communication, this design needs no redox mediator. For this purpose, cyt.c and the enzyme bilirubin oxidase (BOD, EC 1.3.3.5) are co-immobilized in a self-assembled polyelectrolyte multilayer on gold electrodes. Although these two proteins are not natural reaction partners, the protein architecture facilitates an electron transfer from the electrode via multiple protein layers to molecular oxygen, resulting in a significant catalytic reduction current. Finally, we describe a novel strategy for multi-protein layer-by-layer self-assembly combining cyt.c with the enzyme sulfite oxidase (SOx) without the use of any additional polymer. Electrostatic interactions between these two proteins, which have well-separated pI values, were found sufficient for the layer-by-layer deposition of both biomolecules during the assembly process from a low-ionic-strength buffer. It is anticipated that the concepts described in this work will stimulate further progress in the multilayer design of even more complex biomimetic signal cascades taking advantage of direct communication between proteins.
Giacconi et al. (1962) discovered a diffuse cosmic X-ray background with rocket experiments while searching for lunar X-ray emission. Later satellite missions found a spectral peak in the cosmic X-ray background at ~30 keV. Imaging X-ray satellites such as ROSAT (1990-1999) were able to resolve up to 80% of the background below 2 keV into single point sources, mainly active galaxies. The cosmic X-ray background is the integration over cosmic time of all accreting super-massive (several million solar masses) black holes in the centres of active galaxies. Synthesis models need further populations of X-ray-absorbed active galactic nuclei (AGN) in order to explain the cosmic X-ray background peak at ~30 keV. Current X-ray missions such as XMM-Newton and Chandra offer the possibility of studying these additional populations. This Ph.D. thesis studies the populations that dominate the X-ray sky. For this purpose the 120 ksec XMM-Newton Marano field survey, named after an earlier optical quasar survey in the southern hemisphere, is analysed. Based on the optical follow-up observations, the X-ray sources are spectroscopically classified. Optical and X-ray properties of the different X-ray source populations are studied and their differences are derived. The amount of absorption in the X-ray spectra of type II AGN, which are considered a main contributor to the X-ray background at ~30 keV, is determined. In order to extend the sample size of the rare type II AGN, this study also includes objects from another survey, the XMM-Newton Serendipitous Medium Sample. In addition, the dependence of the absorption in type II AGN on redshift and X-ray luminosity is analysed. We detected 328 X-ray sources in the Marano field. 140 sources were spectroscopically classified. We found 89 type I AGN, 36 type II AGN, 6 galaxies, and 9 stars. AGN, galaxies, and stars are clearly distinguishable by their optical and X-ray properties. Type I and II AGN do not separate clearly.
They have a significant overlap in all studied properties. In a few cases the X-ray properties contradict the observed optical properties for type I and type II AGN. For example, we find type II AGN that show evidence for optical absorption but are not absorbed in X-rays. Based on the additional use of near-infrared imaging (K-band), we were able to identify several of the rare type II AGN. The X-ray spectra of type II AGN from the XMM-Newton Marano field survey and the XMM-Newton Serendipitous Medium Sample were analysed. Since most of the sources have only ~40 X-ray counts in the XMM-Newton PN detector, I carefully studied the fit results of simulated X-ray spectra as a function of fit statistic and binning method. The objects revealed only moderate absorption. In particular, I do not find any Compton-thick sources (absorbed by column densities of NH > 1.5 x 10^24 cm^−2). This gives evidence that type II AGN are not the main contributor to the X-ray background around 30 keV. Although bias effects may occur, type II AGN show no noticeable trend of the amount of absorption with redshift or X-ray luminosity.
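The Compton-thick criterion quoted above (column density NH > 1.5 x 10^24 cm^-2) amounts to a simple threshold test on the fitted column densities; the values in this sketch are hypothetical, not fit results from the surveys.

```python
# Compton-thick classification as a threshold on the absorbing column
# density, using the limit N_H > 1.5e24 cm^-2 quoted in the text.
# The column densities below are illustrative placeholders only.
NH_COMPTON_THICK = 1.5e24  # cm^-2

nh_values = [3.2e22, 8.0e21, 4.5e23, 1.1e22]  # hypothetical type II AGN
n_thick = sum(nh > NH_COMPTON_THICK for nh in nh_values)
```

In this illustrative sample no source exceeds the threshold, mirroring the finding that the analysed type II AGN show only moderate absorption.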
In this work it was possible for the first time to generate, with a ps pump laser (10 ps) at a pump wavelength of 1064 nm, white light with a spectral width of more than one optical octave in a microstructured fiber (MSF). Apart from unconverted remnants of the pump radiation, an unstructured and temporally stable white-light spectrum from 700 nm to 1650 nm was generated. The maximum output power of this white-light radiation was 3.1 W. Very good coupling efficiencies of up to 62% were achieved. The dispersive and nonlinear optical effects involved in the white-light generation, such as self-phase modulation, four-wave mixing, modulation instabilities and soliton effects, are investigated and explained theoretically in detail. The work also contains an extensive description of the operation and properties of microstructured fibers with a solid fiber core. Owing to the great variability of the microstructured fiber cladding and the associated waveguide properties, a number of interesting properties arise, in particular for applications in nonlinear optics. A total of four different microstructured fibers were investigated experimentally. For the interpretation of the experimental results, the propagation of the ps pump pulses in a dispersive, nonlinear optical fiber was calculated using the generalized nonlinear Schrödinger equation. By comparing the calculations with the measured data, amplified modulation instabilities and various soliton effects were identified as mainly responsible for white-light generation with ps excitation pulses. On the basis of these investigations, a compact and powerful white-light source was developed in cooperation with the company Jenoptik Laser, Optik, Systeme GmbH.
This source was successfully tested in an optical coherence tomography (OCT) measurement: ex vivo studies showed that this ps white-light source achieves a high penetration depth of about 400 µm into the retina of a monkey.
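Pulse propagation of the kind computed here from the generalized nonlinear Schrödinger equation is typically solved with a split-step Fourier scheme. The minimal sketch below keeps only group-velocity dispersion and the Kerr nonlinearity (no Raman term, self-steepening or higher-order dispersion), and all parameter values are illustrative, not those of the fibers studied in the thesis.

```python
import numpy as np

# Symmetric split-step Fourier scheme for a simplified nonlinear
# Schrödinger equation: dispersion (beta2) handled in the frequency
# domain, Kerr nonlinearity (gamma) in the time domain. Illustrative
# parameters only.
beta2 = -20e-27   # s^2/m, anomalous group-velocity dispersion
gamma = 0.01      # 1/(W*m), Kerr coefficient
L = 10.0          # fiber length in m
nz, nt = 200, 1024

T = np.linspace(-10e-12, 10e-12, nt, endpoint=False)  # time grid in s
dt = T[1] - T[0]
w = 2 * np.pi * np.fft.fftfreq(nt, dt)                # angular frequencies

A = np.sqrt(10.0) / np.cosh(T / 1e-12)   # 10 W peak-power sech pulse
E_in = np.sum(np.abs(A) ** 2) * dt       # input pulse energy

dz = L / nz
half_disp = np.exp(0.25j * beta2 * w**2 * dz)  # exp(i*beta2/2*w^2*dz/2)

for _ in range(nz):
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # dispersion, dz/2
    A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)   # nonlinearity, dz
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # dispersion, dz/2

E_out = np.sum(np.abs(A) ** 2) * dt  # lossless scheme conserves energy
```

Both sub-steps are unitary (pure phase factors), so the scheme conserves pulse energy in this lossless setting; a realistic supercontinuum simulation adds the further GNLSE terms mentioned above.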
In this work, the variability of the atmosphere was investigated in a new coupled climate model (ECHO-GiSP) that includes a simplified stratospheric chemistry (up to 80 km altitude). Two simulations over 150 years were performed. In the first, atmospheric chemistry was modelled but had no influence on the dynamics of the climate model. In the second, by contrast, the effect of the chemistry on the climate dynamics, acting through the radiation balance of the model, was explicitly taken into account. This is the first long-term simulation with a fully coupled global climate model with interactive chemistry. The simulation with interactive chemistry shows a weakening of the Arctic Oscillation (AO) pattern of atmospheric variability. In addition, mean wind speeds in the mid-latitude troposphere are reduced owing to weaker temperature contrasts between the tropics and the polar regions. In the stratosphere, the polar vortex likewise weakens and warms. These effects of the coupling between atmospheric chemistry and the dynamics of the climate model are an important finding, since in earlier climate simulations the variability of the AO was often too pronounced. In the stratosphere, the weakened polar vortex also reduces the large-scale circulation between the two hemispheres. In the troposphere, by contrast, the general circulation, and with it the subtropical jet streams, is strengthened. In the tropics, temperature changes arise from stratospheric ozone variations depending on the AO. In general, the coupling between troposphere and stratosphere changes, including the vertical energy transfer from the troposphere into the stratosphere through the excitation of long atmospheric waves.
In this work, some new results on exploiting the recurrence properties of quasiperiodic dynamical systems are presented by means of a two-dimensional visualization technique, Recurrence Plots (RPs). Quasiperiodicity is the simplest form of dynamics exhibiting nontrivial recurrences, which are common in many nonlinear systems. The concept of recurrence was introduced to study the restricted three-body problem, and it is very useful for the characterization of nonlinear systems. I have analyzed in detail the recurrence patterns of systems with quasiperiodic dynamics, both analytically and numerically. Based on a theoretical analysis, I propose a new procedure to distinguish quasiperiodic dynamics from chaos. This algorithm is particularly useful in the analysis of short time series. Furthermore, this approach proves efficient in recognizing regular and chaotic trajectories of dynamical systems with mixed phase space. Regarding the application to real situations, I show the capability and validity of this method by analyzing time series from fluid experiments.
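A recurrence plot of the kind used here can be computed in a few lines: R[i, j] = 1 whenever states i and j of the trajectory lie within a threshold eps of each other. The quasiperiodic signal below (two incommensurate frequencies) is an illustrative stand-in, not one of the systems analysed in the thesis.

```python
import numpy as np

# Recurrence plot of a quasiperiodic scalar signal: R[i, j] = 1 when
# |x[i] - x[j]| <= eps. Parameters are illustrative placeholders.
t = np.arange(1000) * 0.1
x = np.sin(t) + np.sin(np.sqrt(2) * t)   # two incommensurate frequencies

eps = 0.1                                 # recurrence threshold
dist = np.abs(x[:, None] - x[None, :])    # pairwise distance matrix
R = (dist <= eps).astype(int)             # binary recurrence matrix

recurrence_rate = R.mean()                # fraction of recurrent pairs
```

For a vector-valued trajectory one replaces the scalar distance with a norm in (possibly delay-embedded) phase space; the diagonal-line structure of R is what distinguishes quasiperiodic from chaotic dynamics.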
In this work, methods of Earth system analysis are applied to the investigation of the habitability of terrestrial exoplanets. The thermal evolution of terrestrial planets is calculated with a parameterised convection model for the Earth. As the luminosity of the central star increases, the planetary climate is stabilised by the global carbonate-silicate cycle. For a photosynthetically active biosphere, which can exist within a certain temperature range at a sufficient CO2 concentration, a survival time span is estimated. The range of distances from a star within which such a biosphere is productive is defined as the photosynthetically active habitable zone (pHZ) and is calculated. The time at which the pHZ in an extrasolar planetary system finally disappears marks the maximum life span of the biosphere. For super-Earths, massive terrestrial planets, this life span is longer the more massive the planet is and shorter the more it is covered by continents. For super-Earths that are neither pronounced water worlds nor land worlds, the maximum life span scales with the planetary mass with an exponent of 0.14. Around K and M stars, the survival span of a biosphere on a planet is always determined by this maximum life span and is not limited by the end of the central star's main-sequence evolution. The pHZ concept is applied to the extrasolar planetary system Gliese 581; accordingly, the 8-Earth-mass super-Earth Gliese 581d could be habitable. Based on the pHZ concept presented here, the Rare Earth hypothesis put forward by Ward and Brownlee in 1999 is quantified for the Milky Way for the first time. This hypothesis states that complex life is probably very rare in the universe, whereas primitive life could be widespread.
Different temperature and CO2 tolerances, as well as a different influence on weathering for complex and primitive life forms, lead to different boundaries of the pHZ and to different estimates of the number of planets that could be populated by the respective life forms. It follows that planets populated by complex life should today be about 100 times rarer than planets populated by primitive life.
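The quoted scaling of the maximum biosphere life span with planetary mass, t_max ∝ M^0.14, allows rough relative estimates; since no prefactor is given here, only ratios of life spans are meaningful, and the masses below are illustrative.

```python
# Relative maximum-life-span estimate from the scaling t_max ~ M^0.14
# quoted above. The exponent 0.14 is from the text; the planet masses
# are illustrative assumptions.
def lifespan_ratio(m_planet, m_ref=1.0, exponent=0.14):
    """Ratio of maximum biosphere life spans of two planets (Earth masses)."""
    return (m_planet / m_ref) ** exponent

r = lifespan_ratio(10.0)  # a 10-Earth-mass super-Earth vs. Earth
```

A factor-of-ten increase in mass thus lengthens the maximum life span only by roughly 40 percent, reflecting the weakness of the exponent.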
The biological function and the technological applications of semiflexible polymers, such as DNA, actin filaments and carbon nanotubes, strongly depend on their rigidity. Semiflexible polymers are characterized by their persistence length, the definition of which is the subject of the first part of this thesis. Attractive interactions, which arise, e.g., in the adsorption, condensation and bundling of filaments, can change the conformation of a semiflexible polymer. The conformation depends on the relative magnitude of the material parameters and can be influenced by them in a systematic manner. In particular, the morphologies of semiflexible polymer rings, such as circular nanotubes or DNA, adsorbed onto substrates with three types of structures are studied: (i) a topographical channel, (ii) a chemically modified stripe and (iii) a periodic pattern of topographical steps. The results are compared with the condensation of rings by attractive interactions. Furthermore, the bundling of two individual actin filaments whose ends are anchored is analyzed. This system geometry is shown to provide a systematic and quantitative method to extract the magnitude of the attraction between the filaments from experimentally observable conformations of the filaments.
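One common definition of the persistence length discussed in the first part is via the exponential decay of the tangent-tangent correlation, <t(s)·t(0)> = exp(-s/L_p), so L_p can be read off as the decay length of that correlation. The sketch below recovers L_p from synthetic correlation data by a log-linear fit; all numbers are illustrative, not results of the thesis.

```python
import numpy as np

# Persistence length from the tangent-tangent correlation
#   <t(s).t(0)> = exp(-s / L_p),
# recovered here by a log-linear fit to synthetic correlation data.
L_p = 50.0                          # assumed persistence length (e.g. nm)
s = np.linspace(0.0, 200.0, 201)    # contour-length separations
corr = np.exp(-s / L_p)             # ideal worm-like-chain correlation

# fit log(corr) = slope * s + intercept; L_p is minus the inverse slope
slope = np.polyfit(s, np.log(corr), 1)[0]
L_p_est = -1.0 / slope
```

With noisy experimental tangent correlations the same fit yields an estimate of L_p rather than the exact input value.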
The role played by azobenzene polymers in modern photonic, electronic and opto-mechanical applications can hardly be overestimated. These polymers are successfully used to produce alignment layers for liquid-crystalline fluorescent polymers in display and semiconductor technology, to build waveguides and waveguide couplers, as data storage media, and as labels in quality product protection. Light-driven artificial muscles based on azobenzene elastomers are a particularly active topic of current research. The incorporation of azobenzene chromophores into polymer systems via covalent bonding, or even by blending, gives rise to a number of unusual effects under visible (VIS) and ultraviolet light irradiation. The most amazing effect is the inscription of surface relief gratings (SRGs) onto thin azobenzene polymer films. At least seven models have been proposed to explain the origin of the inscribing force, but none of them satisfactorily describes the light-induced material transport on the molecular level. In most models, to explain the mass transport over micrometer distances during irradiation at room temperature, it is necessary to assume a considerable degree of photoinduced softening, at least comparable with that at the glass transition. Contrary to this assumption, we have gathered convincing evidence that there is no considerable softening of the azobenzene layers under illumination. We can now state with confidence that light-induced softening is a very weak accompanying effect rather than a necessary condition for the formation of SRGs. This means that the inscribing force should be above the yield point of the azobenzene polymer. Hence, an appropriate approach to describe the formation and relaxation of SRGs is a viscoplastic theory. It was used to reproduce the pulse-like inscription of SRGs as measured by VIS light scattering.
At longer inscription times the VIS scattering pattern exhibits some peculiarities which can be explained by the appearance of a density grating, which is shown to arise from the finite compressibility of the polymer film. As a logical consequence of the aforementioned research, a thermodynamic theory explaining the light-induced deformation of free-standing films and the formation of SRGs is proposed. The basic idea of this theory is that under homogeneous illumination an initially isotropic sample should stretch itself along the polarization direction to compensate for the entropy decrease produced by the photoinduced reorientation of the azobenzene chromophores. Finally, some ideas about the further development of this controversial topic are discussed.