This thesis rests on two large surveys of Active Galactic Nuclei (AGNs). The first survey deals with galaxies that host low-level AGNs (LLAGNs) and aims at identifying such galaxies by quantifying their variability. While numerous studies have shown that AGNs can be variable at all wavelengths, the nature of this variability is still not well understood. Studying the properties of LLAGNs may help us to better understand galaxy evolution and how AGNs transition between active and inactive states. In this thesis, we develop a method to extract the variability properties of AGNs. Using multi-epoch deep photometric observations, we subtract the contribution of the host galaxy at each epoch to extract the variability and estimate AGN accretion rates. This pipeline will be a powerful tool in connection with future deep surveys such as Pan-STARRS. The second study in this thesis describes a survey of X-ray selected AGN hosts at redshifts z>1.5 and compares them to quiescent galaxies. This survey aims at studying the environments, sizes and morphologies of star-forming high-redshift AGN hosts in the COSMOS Survey at the epoch of peak AGN activity. Between redshifts 1.5<z<3.8, the COSMOS HST/ACS imaging probes the UV regime, where separating the AGN flux from its host galaxy is very challenging. Nevertheless, we successfully derived the structural properties of 249 AGN hosts using two-dimensional surface-brightness profile fitting with the GALFIT package. This is the largest sample of AGN hosts at redshift z>1.5 to date. We analyzed the evolution of the structural parameters of AGN and non-AGN host galaxies with redshift, and compared their disturbance rates to identify the most probable AGN triggering mechanism in the 43.5<log_10 L_X<45 luminosity range. We also conducted mock observations of AGN and quiescent galaxies to determine errors and corrections for the derived parameters.
We find that the size-absolute magnitude relations of AGN hosts and non-AGN galaxies are very similar, with estimated mean sizes in both samples decreasing by ~50% between redshifts z=1.5 and z=3.5. Morphological classification of both active and quiescent galaxies shows that the majority of the AGN host galaxies are disc-dominated, with disturbance rates that are significantly lower than among the non-AGN galaxies. This finding suggests that major mergers are probably not responsible for triggering AGN accretion in most of these galaxies; secular mechanisms should therefore be responsible.
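As an illustration of the kind of light-profile model that packages such as GALFIT fit to each host galaxy, the sketch below evaluates a one-dimensional Sérsic surface-brightness profile. The parameter values and the widely used approximation b_n ≈ 2n − 1/3 are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def sersic_profile(r, I_e, r_e, n):
    """Sérsic surface-brightness profile:
    I(r) = I_e * exp(-b_n * ((r / r_e)**(1/n) - 1)).

    b_n is chosen so that r_e encloses half of the total light; the common
    approximation b_n ~ 2n - 1/3 is used here (reasonable for n >~ 0.5).
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# Illustrative comparison: an exponential disc (n = 1) versus a
# de Vaucouleurs-like bulge (n = 4), same effective radius and brightness.
r = np.linspace(0.1, 10.0, 100)           # radius, arbitrary units
disc = sersic_profile(r, I_e=1.0, r_e=2.0, n=1.0)
bulge = sersic_profile(r, I_e=1.0, r_e=2.0, n=4.0)
```

By construction, both profiles pass through I_e at the effective radius, while the higher-n profile is much more centrally concentrated, which is the basic discriminator between bulge- and disc-dominated hosts.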
Estimation of the self-similarity exponent has attracted growing interest in recent decades and has become a research subject in various fields and disciplines. Real-world data exhibiting self-similar behavior and/or parametrized by a self-similarity exponent (in particular the Hurst exponent) have been collected in fields ranging from finance and the human sciences to hydrologic and traffic networks. Such a rich class of possible applications obliges researchers to investigate qualitatively new methods for the estimation of the self-similarity exponent as well as for the identification of long-range dependencies (or long memory). In this thesis I present the Bayesian estimation of the Hurst exponent. In contrast to previous methods, the Bayesian approach makes it possible to calculate the point estimator and confidence intervals at the same time, bringing significant advantages in data analysis, as discussed in this thesis. Moreover, it is also applicable to short and unevenly sampled data sets, thus broadening the range of systems where the estimation of the Hurst exponent is possible. Since Gaussian self-similar processes form one of the classes of greatest interest in modeling, this thesis considers realizations of fractional Brownian motion and fractional Gaussian noise. Additionally, applications to real-world data, such as water-level data of the Nile River and fixational eye movements, are also discussed.
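A minimal sketch of a grid-based Bayesian estimate of the Hurst exponent for fractional Gaussian noise, assuming a flat prior and known unit variance for simplicity (a full treatment would also marginalize over the unknown variance). The grid, sample size, and all parameter values are illustrative.

```python
import numpy as np

def fgn_covariance(n, H):
    """Exact autocovariance matrix of unit-variance fractional Gaussian noise."""
    k = np.arange(n, dtype=float)
    gamma = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))
    i, j = np.indices((n, n))
    return gamma[np.abs(i - j)]

def hurst_posterior(x, H_grid):
    """Posterior over H for a fGn sample x (flat prior, known unit variance)."""
    n = len(x)
    logp = np.empty_like(H_grid)
    for m, H in enumerate(H_grid):
        C = fgn_covariance(n, H)
        _, logdet = np.linalg.slogdet(C)
        logp[m] = -0.5 * (logdet + x @ np.linalg.solve(C, x))  # Gaussian log-likelihood
    logp -= logp.max()                  # avoid overflow before normalizing
    post = np.exp(logp)
    return post / post.sum()

# Synthetic fGn sample with known H = 0.7, generated via Cholesky factorization.
rng = np.random.default_rng(0)
n = 200
x = np.linalg.cholesky(fgn_covariance(n, 0.7)) @ rng.standard_normal(n)

H_grid = np.linspace(0.05, 0.95, 91)
post = hurst_posterior(x, H_grid)
H_hat = np.sum(H_grid * post)           # posterior mean as point estimator
ci = H_grid[np.searchsorted(np.cumsum(post), [0.025, 0.975])]  # 95% credible interval
```

The point estimator and the credible interval come out of the same posterior in a single pass, which is the practical advantage over methods that only return a point estimate.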
Actin is one of the most abundant and highly conserved proteins in eukaryotic cells. The globular protein assembles into long filaments, which form a variety of different networks within the cytoskeleton. The dynamic reorganization of these networks - which is pivotal for cell motility, cell adhesion, and cell division - is based on cycles of polymerization (assembly) and depolymerization (disassembly) of actin filaments. Actin binds ATP, and within the filament actin-bound ATP is hydrolyzed into ADP on a time scale of a few minutes. As ADP-actin dissociates faster from the filament ends than ATP-actin, the filament becomes less stable as it grows older. Recent single-filament experiments, in which abrupt dynamical changes during filament depolymerization have been observed, suggest the opposite behavior, however: the actin filaments become increasingly stable with time. Several mechanisms for this stabilization have been proposed, ranging from structural transitions of the whole filament to surface attachment of the filament ends. The key issue of this thesis is to elucidate the unexpected interruptions of depolymerization by a combination of experimental and theoretical studies. In new depolymerization experiments on single filaments, we confirm that filaments cease to shrink in an abrupt manner and determine the time from the initiation of depolymerization until the occurrence of the first interruption. This duration differs from filament to filament and represents a stochastic variable. We consider various hypothetical mechanisms that may cause the observed interruptions. These mechanisms cannot be distinguished directly, but they give rise to distinct distributions of the time until the first interruption, which we compute by modeling the underlying stochastic processes.
A comparison with the measured distribution reveals that the sudden truncation of the shrinkage process neither arises from blocking of the ends nor from a collective transition of the whole filament. Instead, we predict a local transition process occurring at random sites within the filament. The combination of additional experimental findings and our theoretical approach confirms the notion of a local transition mechanism and identifies the transition as the photo-induced formation of an actin dimer within the filaments. Unlabeled actin filaments do not exhibit pauses, which implies that, in vivo, older filaments become destabilized by ATP hydrolysis. This destabilization can be identified with an acceleration of the depolymerization prior to the interruption. In the final part of this thesis, we theoretically analyze this acceleration to infer the mechanism of ATP hydrolysis. We show that the rate of ATP hydrolysis is constant within the filament, corresponding to a random as opposed to a vectorial hydrolysis mechanism.
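The local-transition hypothesis can be made concrete with a small Monte Carlo sketch: each subunit independently undergoes a transition (e.g. photo-induced dimer formation) at a constant rate, the end depolymerizes at constant speed, and the first interruption occurs when the end reaches a subunit that has already transitioned. The rate c, speed v, and filament length are illustrative choices, not the fitted values from the thesis.

```python
import numpy as np

def first_interruption_times(n_filaments, v=30.0, c=2e-3, L=2000, seed=0):
    """Monte Carlo sketch of the local-transition mechanism.

    Subunit i (depth i from the depolymerizing end) transitions at rate c;
    the end moves inward at v subunits per second and pauses when it reaches
    the first subunit that has already transitioned. All parameters are
    illustrative. The resulting survival function is approximately
    exp(-c * v * t**2 / 2), in contrast to the plain exponential expected
    if interruptions were caused by blocking of the filament end.
    """
    rng = np.random.default_rng(seed)
    t_reach = np.arange(1, L + 1) / v        # time at which the end reaches subunit i
    times = np.empty(n_filaments)
    for k in range(n_filaments):
        t_transition = rng.exponential(1.0 / c, size=L)
        hit = t_transition <= t_reach        # already transitioned when reached?
        times[k] = t_reach[hit][0] if hit.any() else np.inf
    return times

pause_times = first_interruption_times(2000)
```

The qualitatively different shapes of the first-interruption distributions (Rayleigh-like here versus exponential for end-blocking) are what make the competing mechanisms statistically distinguishable.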
This work investigates diffusion in nonlinear Hamiltonian systems. The diffusion, more precisely subdiffusion, in such systems is induced by the intrinsic chaotic behavior of trajectories and is thus called 'chaotic diffusion'. Its properties are studied using the example of one- or two-dimensional lattices of harmonic or nonlinear oscillators with nearest-neighbor couplings. The fundamental observation is the spreading of energy for localized initial conditions. Methods of quantifying this spreading behavior are presented, including a new quantity called the excitation time, which allows for a more precise analysis of the spreading than traditional methods. Furthermore, the nonlinear diffusion equation (NDE) is introduced as a phenomenological description of the spreading process, and a number of predictions on the density dependence of the spreading are drawn from this equation. Two mathematical techniques for analyzing nonlinear Hamiltonian systems are introduced. The first is based on a scaling analysis of the Hamiltonian equations, and the results are related to similar scaling properties of the NDE; from this relation, exact spreading predictions are deduced. Secondly, the microscopic dynamics at the edge of spreading states are thoroughly analyzed, which again suggests a scaling behavior that can be related to the NDE. Such a microscopic treatment of chaotically spreading states in nonlinear Hamiltonian systems has not been done before, and the results present a new technique for connecting microscopic dynamics with macroscopic descriptions like the nonlinear diffusion equation. All theoretical results are supported by extensive numerical simulations, partly obtained on one of Europe's fastest supercomputers, located in Bologna, Italy. Finally, the highly interesting case of harmonic oscillators with random frequencies and nonlinear coupling is studied, which resembles to some extent the famous discrete Anderson nonlinear Schrödinger equation.
For this model, a deviation from the widely believed power-law spreading is observed in numerical experiments. Some ideas on a theoretical explanation for this deviation are presented, but a conclusive theory could not be found due to the complicated phase-space structure in this case. Nevertheless, it is hoped that the techniques and results presented in this work will help to eventually understand this controversially discussed case as well.
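The phenomenological picture can be sketched numerically: integrating the nonlinear diffusion equation rho_t = (rho^a rho_x)_x for a localized initial state, the second moment grows as <x^2> ~ t^(2/(a+2)), i.e. subdiffusively for a > 0. The grid, time step, and nonlinearity index below are illustrative choices, not parameters from the thesis.

```python
import numpy as np

def spreading_exponent(a=2, nx=400, dx=0.5, dt=0.02, n_steps=50000):
    """Explicit finite-difference integration of rho_t = (rho^a rho_x)_x
    for a localized initial condition; returns the fitted exponent alpha
    in width ~ t^alpha (expected value: 1 / (a + 2))."""
    x = (np.arange(nx) - nx // 2) * dx
    rho = np.where(np.abs(x) < 5 * dx, 1.0, 0.0)       # localized initial state
    times, m2 = [], []
    for step in range(1, n_steps + 1):
        D = (0.5 * (rho[1:] + rho[:-1])) ** a          # diffusivity at cell interfaces
        flux = -D * np.diff(rho) / dx
        rho[1:-1] -= dt / dx * np.diff(flux)           # conservative update
        if step % (n_steps // 50) == 0:
            times.append(step * dt)
            m2.append(np.sum(rho * x**2) / np.sum(rho))
    logt, logm = np.log(times[10:]), np.log(m2[10:])   # fit on late times only
    slope = np.polyfit(logt, logm, 1)[0]               # <x^2> ~ t^slope
    return slope / 2.0

alpha = spreading_exponent(a=2)   # expected close to 1 / (a + 2) = 0.25
```

The degenerate diffusivity (D = 0 where rho = 0) produces the sharp spreading fronts characteristic of subdiffusive energy spreading, in contrast to the infinitely fast tails of ordinary diffusion.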
We investigate properties of quantum mechanical systems in the light of quantum information theory. We place an emphasis on systems with infinite-dimensional Hilbert spaces, so-called 'continuous-variable systems', which are needed to describe quantum optics beyond the single-photon regime and other bosonic quantum systems. We present methods to obtain a description of such systems from a series of measurements in an efficient manner and demonstrate their performance in realistic situations by means of numerical simulations. We consider both unconditional quantum state tomography, which is applicable to arbitrary systems, and tomography of matrix product states. The latter allows for the tomography of many-body systems, because the necessary number of measurements scales merely polynomially with the particle number, compared to an exponential scaling in the generic case. We also present a method to realize such a tomography scheme for a system of ultra-cold atoms in optical lattices. Furthermore, we discuss in detail the possibilities and limitations of using continuous-variable systems for measurement-based quantum computing. We will see that the distinction between Gaussian and non-Gaussian quantum states and measurements plays a crucial role. We also provide an algorithm to efficiently solve a large and interesting class of naturally occurring Hamiltonians, namely frustration-free ones, and use this insight to obtain a simple approximation method for slightly frustrated systems. To achieve these goals, we make use of, among various other techniques, the well-developed theory of matrix product states, tensor networks, semi-definite programming, and matrix analysis.
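The polynomial scaling mentioned above rests on the fact that a matrix product state with bounded bond dimension chi is described by O(n * chi^2) numbers instead of 2^n amplitudes. A minimal numpy sketch (illustrative only, not the tomography scheme itself) decomposes a state vector into an MPS by sequential singular value decompositions and contracts it back:

```python
import numpy as np

def to_mps(psi, n_sites, chi_max):
    """Decompose a 2**n_sites state vector into a matrix product state by
    sequential SVDs, truncating each bond to at most chi_max singular values."""
    tensors = []
    rest = psi.reshape(1, -1)
    for _ in range(n_sites - 1):
        chi_left = rest.shape[0]
        U, S, Vh = np.linalg.svd(rest.reshape(chi_left * 2, -1), full_matrices=False)
        chi = min(chi_max, len(S))
        tensors.append(U[:, :chi].reshape(chi_left, 2, chi))
        rest = S[:chi, None] * Vh[:chi]    # carry the remainder to the next site
    tensors.append(rest.reshape(rest.shape[0], 2, 1))
    return tensors

def mps_to_vector(tensors):
    """Contract the MPS tensors back into a full state vector."""
    vec = np.ones((1, 1))
    for A in tensors:                       # A has shape (chi_left, 2, chi_right)
        vec = np.tensordot(vec, A, axes=(1, 0))
        vec = vec.reshape(-1, A.shape[2])
    return vec.reshape(-1)

# A GHZ state has bond dimension 2, so chi_max = 2 reproduces it exactly
# while using far fewer parameters than the full vector.
n = 8
ghz = np.zeros(2**n)
ghz[0] = ghz[-1] = 1.0 / np.sqrt(2.0)
mps = to_mps(ghz, n, chi_max=2)
```

States with low entanglement across every bipartition are captured exactly by small chi; tomography schemes exploit this by reconstructing the local tensors from a polynomial number of measurements.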
The microscopic origin of ultrafast demagnetization, i.e. the quenching of the magnetization of a ferromagnetic metal on a sub-picosecond timescale after laser excitation, is still only incompletely understood, despite a large body of experimental and theoretical work performed since the discovery of the effect more than 15 years ago. Time- and element-resolved x-ray magnetic circular dichroism measurements can provide insight into the microscopic processes behind ultrafast demagnetization as well as its dependence on material properties. Using the BESSY II Femtoslicing facility, a storage-ring-based source of soft x-ray pulses as short as 100 fs, the ultrafast magnetization dynamics of ferromagnetic NiFe and GdTb alloys as well as of a Au/Ni layered structure were investigated in laser-pump x-ray-probe experiments. After laser excitation, the constituents of Ni50Fe50 and Ni80Fe20 exhibit distinctly different time constants of demagnetization, leading to decoupled dynamics, despite the strong exchange interaction that couples the Ni and Fe sublattices under equilibrium conditions. Furthermore, the time constants of demagnetization for Ni and Fe differ between Ni50Fe50 and Ni80Fe20, and also differ from the values for the respective pure elements. These variations are explained by taking into account the magnetic moments of the Ni and Fe sublattices, which are changed from the pure-element values by alloying, as well as the strength of the intersublattice exchange interaction. GdTb exhibits demagnetization in two steps, typical for rare earths. The time constant of the second, slower magnetization decay was previously linked to the strength of spin-lattice coupling in pure Gd and Tb, with the stronger, direct spin-lattice coupling in Tb leading to faster demagnetization. In GdTb, the demagnetization of Gd follows that of Tb on all timescales.
This is due to the opening of an additional channel for the dissipation of spin angular momentum to the lattice, since Gd magnetic moments in the alloy are coupled via indirect exchange interaction to neighboring Tb magnetic moments, which are in turn strongly coupled to the lattice. Time-resolved measurements of the ultrafast demagnetization of a Ni layer buried under a Au cap layer, thick enough to absorb nearly all of the incident pump laser light, showed a somewhat slower but still sub-picosecond demagnetization of the buried Ni layer in Au/Ni compared to a Ni reference sample. Supported by simulations, I conclude that demagnetization can thus be induced by transport of hot electrons excited in the Au layer into the Ni layer, without the need for direct interaction between photons and spins.
Structural dynamics of photoexcited nanolayered perovskites studied by ultrafast x-ray diffraction
(2012)
This publication-based thesis represents a contribution to the active research field of ultrafast structural dynamics in laser-excited nanostructures. The investigation of such dynamics is mandatory for the understanding of the various physical processes on microscopic scales in complex materials, which hold great potential for advances in many technological applications. I theoretically and experimentally examine the coherent, incoherent and anharmonic lattice dynamics of epitaxial metal-insulator heterostructures on timescales ranging from femtoseconds up to nanoseconds. To infer information on the transient dynamics in the photoexcited crystal lattices, experimental techniques using ultrashort optical and x-ray pulses are employed. The experimental setups include table-top sources as well as large-scale facilities such as synchrotron sources. At the core of my work lies the development of a linear-chain model to simulate and analyze the photoexcited atomic-scale dynamics. The calculated strain fields are then used to simulate the optical and x-ray response of the considered thin films and multilayers in order to relate the experimental signatures to particular structural processes. In this way one obtains insight into the rich lattice dynamics, which exhibit coherent transport of vibrational energy from local excitations via delocalized phonon modes of the samples. The complex deformations in tailored multilayers are identified as giving rise to highly nonlinear x-ray diffraction responses due to transient interference effects. The understanding of such effects and the ability to calculate them precisely are exploited for the design of novel ultrafast x-ray optics. In particular, I present several Phonon Bragg Switch concepts to efficiently generate ultrashort x-ray pulses for time-resolved structural investigations.
By extending the numerical models to include incoherent phonon propagation and anharmonic lattice potentials, I present a new view on the fundamental research topics of nanoscale thermal transport and anharmonic phonon-phonon interactions such as nonlinear sound propagation and phonon damping. The former issue is exemplified by the time-resolved heat conduction from thin SrRuO3 films into a SrTiO3 substrate, which proceeds unexpectedly slowly. Furthermore, I discuss various experiments which can be well reproduced by the versatile numerical models and thus evidence strong lattice anharmonicities in the perovskite oxide SrTiO3. The thesis also presents several advances in experimental techniques, such as time-resolved phonon spectroscopy with optical and x-ray photons, as well as concepts for the implementation of x-ray diffraction setups with largely improved time resolution at standard synchrotron beamlines for investigations of ultrafast structural processes. This work forms the basis for ongoing research topics in complex oxide materials, including electronic correlations and phase transitions related to the elastic, magnetic and polarization degrees of freedom.
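The linear-chain idea can be illustrated with a minimal mass-and-spring sketch: photoexcitation is modeled as a sudden change of the equilibrium bond lengths in a surface region, which launches strain fronts propagating at the sound velocity. Units, parameter values, and the excitation model are illustrative simplifications, not the actual model used in the thesis.

```python
import numpy as np

def linear_chain_strain(n=200, n_excited=20, kappa=1.0, m=1.0,
                        stress=0.01, dt=0.05, n_steps=2000):
    """1D chain of masses coupled by harmonic springs (lattice spacing 1).
    Laser excitation is modeled as a sudden expansion of the equilibrium
    lengths of the first n_excited bonds; integration uses a symplectic
    Euler step. Returns the bond strains after n_steps * dt time units."""
    u = np.zeros(n)                         # displacements from old equilibrium
    v = np.zeros(n)                         # velocities
    eq = np.ones(n - 1)
    eq[:n_excited] += stress                # photo-induced expansive stress
    for _ in range(n_steps):
        stretch = (1.0 + np.diff(u)) - eq   # bond length minus its equilibrium
        force = np.zeros(n)
        force[:-1] += kappa * stretch       # bond pulls its left mass forward
        force[1:] -= kappa * stretch        # ... and its right mass backward
        v += force / m * dt
        u += v * dt
    return np.diff(u)                       # strain field at the final time

strain = linear_chain_strain()   # sound speed is 1 site per time unit here
```

After 100 time units the strain front has travelled about 100 sites beyond the excited region, while the chain far ahead of the front remains essentially at rest, reproducing the causal sound-front picture that the thesis's strain simulations are built on.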
In the western hemisphere, the piano is one of the most important instruments. While its evolution has spanned more than three centuries and the most important physical aspects have already been investigated, some aspects of the characterization of the piano remain poorly understood. For the pivotal piano soundboard in particular, the effect that the ribs mounted on the board exert on sound radiation and propagation is mostly neglected in the literature. The present investigation deals with the sound-wave propagation effects that emerge in the presence of an array of equally spaced ribs mounted on a soundboard. Solid-state theory proposes particular eigenmodes and eigenfrequencies for such arrangements, which are comparable to single units in a crystal. Following this 'linear chain model' (LCM), differences in the frequency spectrum are observable as a distinct band structure. The amplitudes of the modes are also changed, due to differences in the damping factor. These scattering effects were investigated not only for a well-understood conceptual rectangular soundboard (multichord), but also for a genuine piano resonance board manufactured by the piano maker 'C. Bechstein Pianofortefabrik'. To make it possible to distinguish the characteristic spectra both with and without mounted ribs, the typical assembly plan for the Bechstein instrument was specially customized. Spectral similarities and differences between both boards are found in terms of damping and tone. Furthermore, specially prepared minimally invasive piezoelectric polymer sensors made from polyvinylidene fluoride (PVDF) were used to record solid-state vibrations of the investigated system. The essential calibration and characterization of these polymer sensors was performed by determining the electromechanical conversion, which is represented by the piezoelectric coefficient.
To this end, the robust 'sinusoidally varying external force' method was applied, in which a dynamic force perpendicular to the sensor's surface generates mobile charge carriers. Crucial parameters were monitored, with the frequency response function as the most important one for acousticians. Along with conventional condenser microphones, the sound was measured both as solid-state vibration and as airborne waves. On this basis, statements can be made about the emergence, propagation, and also the overall radiation of the generated modes of the vibrating system. Ultimately, these results acoustically characterize the entire system.
Theory of mRNA degradation
(2012)
One of the central themes of biology is to understand how individual cells achieve a high fidelity in gene expression. Each cell needs to ensure accurate protein levels for its proper functioning and its capability to proliferate. Therefore, complex regulatory mechanisms have evolved that render the expression of each gene dependent on the expression level of (all) other genes. Regulation can occur at different stages within the framework of the central dogma of molecular biology. One very effective and relatively direct mechanism concerns the regulation of the stability of mRNAs. All organisms have evolved diverse and powerful mechanisms to achieve this. In order to better comprehend the regulation in living cells, biochemists have studied specific degradation mechanisms in detail. In addition, modern high-throughput techniques make it possible to obtain quantitative data on a global scale through the parallel analysis of the decay patterns of many different mRNAs from different genes. In previous studies, the interpretation of these mRNA decay experiments relied on a simple theoretical description based on an exponential decay. However, this does not account for the complexity of the responsible mechanisms and, as a consequence, the exponential decay is often not in agreement with the experimental decay patterns. We have developed an improved and more general theory of mRNA degradation which provides a general framework of mRNA expression and allows the description of specific degradation mechanisms. We have made an attempt to provide detailed models for the regulation in different organisms. In the yeast S. cerevisiae, different degradation pathways are known to compete, and furthermore most of them rely on the biochemical modification of mRNA molecules. In bacteria such as E. coli, degradation proceeds primarily endonucleolytically, i.e. it is governed by the initial cleavage within the coding region.
In addition, it is often coupled to the level of maturity and the size of the polysome of an mRNA. Both for S. cerevisiae and for E. coli, our descriptions lead to a considerable improvement in the interpretation of experimental data. The general outcome is that the degradation of mRNA must be described by an age-dependent degradation rate, which can be interpreted as a consequence of molecular aging of mRNAs. Within our theory, we find adequate ways to address this much-debated topic from a theoretical perspective. The improved understanding of mRNA degradation can be readily applied to further comprehend mRNA expression under different internal or environmental conditions, such as after the induction of transcription or the application of stress. Also, the role of mRNA decay can be assessed in the context of translation and protein synthesis. The ultimate goal in understanding gene regulation mediated by mRNA stability will be to identify the relevance and biological function of the different mechanisms. Once more quantitative data become available, our description will make it possible to elaborate the role of each mechanism by devising a suitable model.
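The difference between a plain exponential decay and an age-dependent degradation rate can be sketched with the survival function S(t) = exp(-integral of omega(a) da from 0 to t). The specific age dependence below (young mRNAs partially protected, omega(a) = omega * a / (a + a0)) is an illustrative choice, not one of the organism-specific models developed in the thesis.

```python
import numpy as np

def survival_constant(t, omega):
    """Exponential decay: age-independent degradation rate omega."""
    return np.exp(-omega * t)

def survival_aging(t, omega, a0):
    """Age-dependent rate omega(a) = omega * a / (a + a0): young mRNAs are
    protected and the rate approaches omega for old molecules. The integral
    of omega(a) has the closed form
    S(t) = exp(-omega * (t - a0 * log(1 + t / a0)))."""
    return np.exp(-omega * (t - a0 * np.log(1.0 + t / a0)))

t = np.linspace(0.0, 20.0, 201)   # time after transcription shut-off (a.u.)
s_exp = survival_constant(t, omega=0.5)
s_age = survival_aging(t, omega=0.5, a0=2.0)   # delayed onset of decay
```

The aging curve starts out flat and only later approaches the exponential slope, which is exactly the kind of deviation from single-exponential decay patterns that the experimental data exhibit.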
Thermal and quantum fluctuations of the electromagnetic near field of atoms and macroscopic bodies play a key role in quantum electrodynamics (QED), as in the Lamb shift. They lead, e.g., to atomic level shifts, dispersion interactions (Van der Waals-Casimir-Polder interactions), and state broadening (Purcell effect) because the field is subject to boundary conditions. Such effects can be observed with high precision on the mesoscopic scale which can be accessed in micro-electro-mechanical systems (MEMS) and solid-state-based magnetic microtraps for cold atoms (‘atom chips’). A quantum field theory of atoms (molecules) and photons is adapted to nonequilibrium situations. Atoms and photons are described as fully quantized while macroscopic bodies can be included in terms of classical reflection amplitudes, similar to the scattering approach of cavity QED. The formalism is applied to the study of nonequilibrium two-body potentials. We then investigate the impact of the material properties of metals on the electromagnetic surface noise, with applications to atomic trapping in atom-chip setups and quantum computing, and on the magnetic dipole contribution to the Van der Waals-Casimir-Polder potential in and out of thermal equilibrium. In both cases, the particular properties of superconductors are of high interest. Surface-mode contributions, which dominate the near-field fluctuations, are discussed in the context of the (partial) dynamic atomic dressing after a rapid change of a system parameter and in the Casimir interaction between two conducting plates, where nonequilibrium configurations can give rise to repulsion.
In the course of this thesis, gold nanoparticle/polyelectrolyte multilayer structures were prepared, characterized, and investigated with regard to their static and ultrafast optical properties. Using the dip-coating or spin-coating layer-by-layer deposition method, gold-nanoparticle layers were embedded in a polyelectrolyte environment with high structural perfection. Typical structures exhibit four repetition units, each consisting of one gold-particle layer and ten double layers of polyelectrolyte (cationic + anionic polyelectrolyte). The structures were characterized by X-ray reflectivity measurements, which reveal Bragg peaks up to the seventh order, evidencing the high stratification of the particle layers. In the same measurements, pronounced Kiessig fringes were observed, which indicate a low global roughness of the samples. Atomic force microscopy (AFM) images verified this low roughness, which results from the high smoothing capability of polyelectrolyte layers. This smoothing effect facilitates the fabrication of stratified nanoparticle/polyelectrolyte multilayer structures, as nicely illustrated by a transmission electron microscopy image. The samples' optical properties were investigated by static spectroscopic measurements in the visible and UV range. The measurements revealed a frequency shift of the reflectance and of the plasmon absorption band, depending on the thickness of the polyelectrolyte layers that cover a nanoparticle layer. When the covering layer becomes thicker than the particle interaction range, the absorption spectrum becomes independent of the polymer thickness. However, the reflectance spectrum continues shifting to lower frequencies (even for large thicknesses). The range of plasmon interaction was determined to be on the order of the particle diameter for 10 nm, 20 nm, and 150 nm particles.
The transient broadband complex dielectric function of a multilayer structure was determined experimentally by ultrafast pump-probe spectroscopy. This was achieved by simultaneous measurements of the changes in the reflectance and transmittance of the excited sample over a broad spectral range. The changes in the real and imaginary parts of the dielectric function were directly deduced from the measured data by using a recursive formalism based on the Fresnel equations. This method can be applied to a broad range of nanoparticle systems where experimental data on the transient dielectric response are rare. This complete experimental approach serves as a test ground for modeling the dielectric function of a nanoparticle compound structure upon laser excitation.
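As a forward-model counterpart to the extraction procedure described above, the following sketch computes the reflectance and transmittance of a layer stack at normal incidence using the standard characteristic-matrix (recursive Fresnel) formalism. It illustrates the type of calculation involved; the thesis's recursive inversion of the measured changes in R and T into the dielectric function is not reproduced here, and all layer parameters are illustrative.

```python
import numpy as np

def stack_rt(n_list, d_list, wavelength):
    """Reflectance and transmittance of a thin-film stack at normal incidence
    via the characteristic-matrix formalism.
    n_list: complex refractive indices [ambient, layer1, ..., substrate];
    d_list: thicknesses of the inner layers (same units as wavelength).
    Ambient and substrate are assumed non-absorbing (real indices)."""
    k0 = 2.0 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_list[1:-1], d_list):
        delta = k0 * n * d                    # phase thickness of the layer
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    n_in, n_out = n_list[0], n_list[-1]
    B, C = M @ np.array([1.0, n_out])
    r = (n_in * B - C) / (n_in * B + C)       # amplitude reflection coefficient
    t = 2.0 * n_in / (n_in * B + C)           # amplitude transmission coefficient
    R = np.abs(r) ** 2
    T = np.real(n_out / n_in) * np.abs(t) ** 2
    return R, T
```

For an uncoated glass surface this reproduces the familiar 4% reflectance, and an ideal quarter-wave layer with n = sqrt(n_substrate) drives the reflectance to zero; in a pump-probe analysis one would evaluate such a model before and after excitation to connect measured changes in R and T to changes in the layer's dielectric function.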
Tensorial spacetime geometries carrying predictive, interpretable and quantizable matter dynamics
(2012)
Which tensor fields G on a smooth manifold M can serve as a spacetime structure? In the first part of this thesis, it is found that only a severely restricted class of tensor fields can provide classical spacetime geometries, namely those that can carry predictive, interpretable and quantizable matter dynamics. The obvious dependence of this characterization of admissible tensorial spacetime geometries on specific matter is not a weakness, but rather presents an insight: it was Maxwell theory that justified Einstein in promoting Lorentzian manifolds to the status of a spacetime geometry. Any matter that does not mimic the structure of Maxwell theory will force us to choose another geometry on which the matter dynamics of interest are predictive, interpretable and quantizable. These three physical conditions on matter impose three corresponding algebraic conditions on the totally symmetric contravariant coefficient tensor field P that determines the principal symbol of the matter field equations in terms of the geometric tensor G: the tensor field P must be hyperbolic, time-orientable and energy-distinguishing. Remarkably, these physically necessary conditions on the geometry are mathematically already sufficient to realize all kinematical constructions familiar from Lorentzian geometry, for precisely the same structural reasons. This we were able to show by employing a subtle interplay of convex analysis, the theory of partial differential equations and real algebraic geometry. In the second part of this thesis, we then explore general properties of any hyperbolic, time-orientable and energy-distinguishing tensorial geometry.
Physically most important are the construction of freely falling non-rotating laboratories, the appearance of admissible modified dispersion relations to particular observers, and the identification of a mechanism that explains why massive particles that are faster than some massless particles can radiate off energy until they are slower than all massless particles in any hyperbolic, time-orientable and energy-distinguishing geometry. In the third part of the thesis, we explore how tensorial spacetime geometries fare when one wants to quantize particles and fields on them. This study is motivated, in part, by the need to provide the tools to calculate the rate at which superluminal particles radiate off energy to become infraluminal, as explained above. Remarkably, it is again the three geometric conditions of hyperbolicity, time-orientability and energy-distinguishability that allow the quantization of general linear electrodynamics on an area metric spacetime and the quantization of massive point particles obeying any admissible dispersion relation. We explore the issue of field equations of all possible derivative orders in a rather systematic fashion, and prove a practically useful theorem that determines the Dirac algebras allowing the reduction of derivative orders. The final part of the thesis presents the sketch of a truly remarkable result obtained by building on the work of the present thesis. Based in particular on the subtle duality maps between momenta and velocities in general tensorial spacetimes, it could be shown that gravitational dynamics for hyperbolic, time-orientable and energy-distinguishing geometries need not be postulated; rather, the formidable physical problem of their construction can be reduced to a mere mathematical task: the solution of a system of homogeneous linear partial differential equations.
This far-reaching physical result on modified gravity theories is a direct, but difficult to derive, outcome of the findings in the present thesis. Throughout the thesis, the abstract theory is illustrated through instructive examples.
Particles in Saturn’s main rings range in size from dust to kilometer-sized objects. Their size distribution is thought to be the result of competing accretion and fragmentation processes. While growth is naturally limited in tidal environments, frequent collisions among these objects may contribute to both accretion and fragmentation. As ring particles are primarily made of water ice, attractive surface forces such as adhesion could significantly influence these processes, ultimately determining the resulting size distribution. Here, we derive analytic expressions for the specific self-energy Q and the related specific break-up energy Q⋆ of aggregates. These expressions can be used for any aggregate type composed of monomeric constituents. We compare these expressions to numerical experiments in which we create aggregates of various types, including regular packings such as the face-centered cubic (fcc), Ballistic Particle Cluster Aggregates (BPCA), and modified BPCAs with, e.g., different constituent size distributions. We show that, by accounting for attractive surface forces such as adhesion, a simple approach is able to: a) account for the size dependence of the specific break-up energy reported in the literature, namely the division into “strength” and “gravity” regimes, and b) estimate the maximum aggregate size in a collisional ensemble to be on the order of a few meters, consistent with the maximum aggregate size of about 10 m observed in Saturn’s rings.
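A back-of-the-envelope sketch (not the thesis's derivation) shows how an adhesion-based strength regime can yield a meter-scale maximum aggregate size. If breaking an aggregate requires severing one plane of adhesive contacts, the specific break-up energy scales roughly as Q* ~ gamma / (rho * R), decreasing with aggregate radius R, while self-gravity contributes a term growing as R^2. All numbers below are rough, illustrative values for icy ring particles, not the thesis's fitted expressions.

```python
# Equating the strength-regime break-up energy with the specific collision
# energy v**2 / 2 gives a maximum aggregate size. Illustrative values only.
gamma = 0.1    # J/m^2, effective surface (adhesion) energy of water ice
rho = 500.0    # kg/m^3, bulk density of a porous icy aggregate
v = 0.01       # m/s, typical random collision velocity in the rings

def q_star_strength(R):
    """Specific break-up energy in the strength regime (toy scaling)."""
    return gamma / (rho * R)

R_max = 2.0 * gamma / (rho * v**2)   # where q_star_strength(R_max) = v**2 / 2
print(f"maximum aggregate radius ~ {R_max:.1f} m")   # -> ~ 4.0 m
```

Despite its crudeness, the estimate lands in the few-meter range quoted above; the full treatment in the thesis derives Q and Q* from the contact energies of explicit aggregate models rather than from this dimensional argument.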
The inspiral and merger of two black holes is among the most exciting and extreme events in our universe. Being one of the loudest sources of gravitational waves, they provide a unique dynamical probe of strong-field general relativity and a fertile ground for the observation of fundamental physics. While the detection of gravitational waves alone will allow us to observe our universe through an entirely new window, combining the information obtained from both gravitational-wave and electro-magnetic observations will allow us to gain even greater insight into some of the most exciting astrophysical phenomena. In addition, binary black-hole mergers serve as an intriguing tool to study the geometry of space-time itself. In this dissertation we study the merger process of binary black holes in a variety of conditions. Our results show that asymmetries in the curvature distribution on the common apparent horizon are correlated with the linear momentum acquired by the merger remnant. We propose useful tools for the analysis of black holes in the dynamical and isolated horizon frameworks and shed light on how the final merger of apparent horizons proceeds after a common horizon has already formed. We connect mathematical theorems with data obtained from numerical simulations and provide a first glimpse of the behavior of these surfaces in situations not accessible to analytical tools. We study electro-magnetic counterparts of super-massive binary black-hole mergers with fully 3D general-relativistic simulations of binary black holes immersed both in a uniform magnetic field in vacuum and in a tenuous plasma. We find that while a direct detection of merger signatures with current electro-magnetic telescopes is unlikely, secondary emission, either by altering the accretion rate of the circumbinary disk or by synchrotron radiation from accelerated charges, may be detectable.
We propose a novel approach to measure the electro-magnetic radiation in these simulations and find a non-collimated emission that dominates over the collimated one, which appears in the form of dual jets associated with each of the black holes. Finally, we provide an optimized gravitational-wave detection pipeline using phenomenological waveforms for signals from compact binary coalescence and show that by including spin effects in the waveform templates, the detection efficiency is drastically improved and the bias on the recovered source parameters is reduced. On the whole, this dissertation provides evidence that a multi-messenger approach to binary black-hole merger observations provides an exciting prospect to understand these sources and, ultimately, our universe.
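Template-based searches of this kind rest on matched filtering: sliding a known waveform template over noisy data and looking for a correlation peak. A toy time-domain illustration (not the thesis pipeline, which works against detector noise spectra; all signal parameters below are hypothetical):

```python
import math
import random

def matched_filter(data, template):
    """Correlate a known template against the data at every lag and
    return (best_lag, peak correlation). Toy time-domain stand-in for
    the template-bank searches used for compact-binary signals."""
    n, m = len(data), len(template)
    norm = math.sqrt(sum(t * t for t in template))
    best_lag, peak = 0, float("-inf")
    for lag in range(n - m + 1):
        c = sum(data[lag + k] * template[k] for k in range(m)) / norm
        if c > peak:
            best_lag, peak = lag, c
    return best_lag, peak

random.seed(2)
# Template: a short damped sinusoid standing in for a ringdown-like signal.
template = [math.exp(-0.05 * k) * math.sin(0.6 * k) for k in range(64)]
# Data: Gaussian noise with the template injected at sample 300.
data = [random.gauss(0.0, 0.1) for _ in range(512)]
for k, t in enumerate(template):
    data[300 + k] += t
lag, peak = matched_filter(data, template)
print(lag, peak)
```

The correlation peak recovers the injection time; including more physical effects (such as spins) in the template family improves how well the peak stands out and how faithfully parameters are recovered.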
Gravitational waves are among the most exciting predictions of Einstein's theory of gravitation that have not yet been confirmed experimentally by a direct detection. These are tiny distortions of spacetime itself, and a world-wide effort to measure them directly for the first time with a network of large-scale laser interferometers is currently ongoing and expected to provide positive results within this decade. One potential source of measurable gravitational waves is the inspiral and merger of two compact objects, such as binary black holes. Successfully finding their signature in the noise-dominated data of the detectors crucially relies on accurate predictions of what we are looking for. In this thesis, we present a detailed study of how the most complete waveform templates can be constructed by combining the results from (A) analytical expansions within the post-Newtonian framework and (B) numerical simulations of the full relativistic dynamics. We analyze various strategies to construct complete hybrid waveforms that consist of a post-Newtonian inspiral part matched to numerical-relativity data. We elaborate on existing approaches for nonspinning systems by extending the accessible parameter space and introducing an alternative scheme formulated in the Fourier domain. Our methods can now be readily applied to multiple spherical-harmonic modes and precessing systems. In addition, we analyze in detail the accuracy of hybrid waveforms with the goal of quantifying how the numerous sources of error in the approximation techniques affect the application of such templates in real gravitational-wave searches. This is of major importance for the future construction of improved models, but also for the correct interpretation of gravitational-wave observations made utilizing any complete waveform family.
In particular, we comprehensively discuss how long the numerical-relativity contribution to the signal has to be in order to make the resulting hybrids accurate enough, and for currently feasible simulation lengths we assess the physics one can potentially do with template-based searches.
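The core idea of time-domain hybridization can be sketched with a linear blending window over a matching interval: pure post-Newtonian waveform before the window, pure numerical-relativity waveform after it, and a smooth mix in between. In this minimal sketch both pieces are the same toy chirp, so the hybrid is exact by construction; a real hybrid must first align time and phase offsets between the two pieces:

```python
import math

def blend(h_pn, h_nr, t, t1, t2):
    """Hybridize two waveform samples at time t with a linear transition
    window over the matching interval [t1, t2]: pure PN before t1,
    pure NR after t2, and a linear mix in between."""
    if t <= t1:
        w = 0.0
    elif t >= t2:
        w = 1.0
    else:
        w = (t - t1) / (t2 - t1)
    return (1.0 - w) * h_pn + w * h_nr

# Toy waveform: a chirp with slowly increasing frequency standing in for
# both the PN inspiral and the NR merger part (hypothetical, shape only).
def h(t):
    return math.sin(2 * math.pi * (0.05 * t + 0.001 * t * t))

ts = [0.1 * k for k in range(1000)]
hybrid = [blend(h(t), h(t), t, t1=40.0, t2=60.0) for t in ts]
```

Because the two pieces agree in the matching window here, the hybrid reproduces the input waveform everywhere; with imperfectly matched pieces, the residual in the window is exactly the kind of error budget the accuracy analysis above quantifies.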
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. 
The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
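The growth of the run length with motor number can be illustrated with a standard birth-death sketch of the bound-motor number (in the spirit of such motor models; the rates and velocity below are hypothetical, not fitted values from the thesis):

```python
def mean_run_length(n_max, eps, pi, v):
    """Mean cargo run length for up to n_max identical motors.

    The number n of bound motors performs a birth-death process:
    unbinding at rate n*eps, rebinding at rate (n_max - n)*pi.
    t[n] is the mean time to go from n to n-1 bound motors, including
    upward excursions; the cargo starts with one bound motor and
    detaches at n = 0.  Run length = constant velocity v times the
    mean time to detachment.
    """
    t = [0.0] * (n_max + 1)
    t[n_max] = 1.0 / (n_max * eps)
    for n in range(n_max - 1, 0, -1):
        birth = (n_max - n) * pi
        death = n * eps
        t[n] = (1.0 + birth * t[n + 1]) / death
    return v * t[1]

# Hypothetical kinesin-like numbers: eps = 1/s, rebinding pi = 5/s, v = 1 um/s.
for n in (1, 2, 3, 4):
    print(n, mean_run_length(n, eps=1.0, pi=5.0, v=1.0))
```

Each additional motor multiplies the detachment time by roughly (1 + pi/eps), which is the essentially exponential increase of the run length with motor number noted above.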
Eumelanin is a fluorophore with partly quite unusual spectral properties. Among other things, earlier publications reported differences between the one- and two-photon-excited fluorescence spectra, which is why a stepwise excitation process was suspected in the case of nonlinear excitation. To better understand these and other optical properties of eumelanin, the present work pursued a variety of measurement approaches from linear and nonlinear optics on synthetic eumelanin in 0.1 M NaOH. From the results, a model was derived that consistently describes the observed photonic properties. In this cascaded state model (cascade model), the absorbed photon energy is transferred stepwise from excited states of high transition energies to excited states of lower transition energies. Transient-absorption measurements revealed dominant contributions with short lifetimes in the picosecond range, indicating a high relaxation speed along the cascade. By investigating the nonlinearly excited fluorescence of eumelanin aggregates of different sizes, it could be shown that differences between the linearly and nonlinearly excited fluorescence spectra may be caused not only by a stepwise excitation process under nonlinear excitation but also by differences in the ratios of the quantum yields of small and large aggregates when switching from linear to nonlinear excitation. By determining the excitation cross-section and the dependence of the nonlinearly excited fluorescence of eumelanin on the excitation pulse duration, however, a stepwise two-photon excitation process via an intermediate state with lifetimes in the picosecond range could be demonstrated.
A key non-destructive technique for the analysis, optimization and development of new functional materials such as sensors, transducers, electro-optical and memory devices is presented. Thermal-Pulse Tomography (TPT) provides high-resolution three-dimensional images of the electric-field and polarization distributions in a material. This thermal technique uses pulsed heating by means of focused laser light that is absorbed by opaque electrodes. The diffusion of the heat causes changes in the sample geometry, generating a short-circuit current or a change in surface potential that contains information about the spatial distribution of electric dipoles or space charges. Afterwards, a reconstruction of the internal electric-field and polarization distributions in the material is possible via scale-transformation or regularization methods. In this way, TPT was used for the first time to image the inhomogeneous ferroelectric switching in ferroelectric polymer films (candidates for memory devices). The results show the typical pinning of electric dipoles in the ferroelectric polymer under study and support the previous hypothesis of ferroelectric reversal at the grain level via nucleation and growth. In order to obtain more information about the lateral and depth resolution of the thermal techniques, TPT and its counterpart, the Focused Laser Intensity Modulation Method (FLIMM), were applied to ferroelectric films with grid-shaped electrodes. The results from both techniques, after data analysis with different regularization and scale methods, are in full agreement. A possibly overestimated lateral resolution of the FLIMM was also revealed, which highlights TPT as the most efficient and reliable thermal technique. After an improvement of the optics, the Thermal-Pulse Tomography method was applied to polymer-dispersed liquid crystal (PDLC) films, which are used in electro-optical applications.
The results indicated a possible electrostatic interaction between the COH group of the liquid crystals and the fluorine atoms of the ferroelectric matrix used. The geometrical parameters of the LC droplets were partially reproduced when compared with Scanning Electron Microscopy (SEM) images. For further applications, the use of a non-strong-ferroelectric polymer matrix is suggested. In an effort to develop new polymer ferroelectrets and to optimize their properties, new multilayer systems were inspected. The results of the TPT method showed the non-uniformity of the internal electric-field distribution in the shaped macrodipoles and thus suggested the instability of the sample. Further investigation of multilayer ferroelectrets, as well as the implementation of less conductive polymer layers, is suggested.
This thesis contains several theoretical studies on optomechanical systems, i.e. physical devices where mechanical degrees of freedom are coupled with optical cavity modes. This optomechanical interaction, mediated by radiation pressure, can be exploited for cooling and controlling mechanical resonators in a quantum regime. The goal of this thesis is to propose several new ideas for preparing mesoscopic mechanical systems (of the order of 10^15 atoms) in highly non-classical states. In particular, we have shown new methods for preparing optomechanical pure states, squeezed states and entangled states. At the same time, procedures for experimentally detecting these quantum effects have been proposed. In particular, a quantitative measure of non-classicality has been defined in terms of the negativity of phase-space quasi-distributions. An operational algorithm for experimentally estimating the non-classicality of quantum states has been proposed and successfully applied in a quantum optics experiment. The research has been performed with relatively advanced mathematical tools related to differential equations with periodic coefficients, classical and quantum Bochner’s theorems, and semidefinite programming. Nevertheless, the physics of the problems and the experimental feasibility of the results have been the main priorities.
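Negativity-based non-classicality measures of this kind can be illustrated on the single-photon Fock state, whose Wigner function is known in closed form. A sketch of a negativity-volume estimate (in the style of the Kenfack-Życzkowski measure; the abstract does not specify which negativity functional the thesis uses) by direct grid integration:

```python
import math

def wigner_fock1(x, p):
    """Wigner function of the single-photon Fock state |1> (hbar = 1):
    W(x, p) = (1/pi) * exp(-(x^2 + p^2)) * (2*(x^2 + p^2) - 1),
    which is negative inside the circle x^2 + p^2 < 1/2."""
    r2 = x * x + p * p
    return math.exp(-r2) * (2.0 * r2 - 1.0) / math.pi

# Negativity volume delta = integral |W| dx dp - 1, estimated by a
# Riemann sum on a phase-space grid; the exact value for |1> is
# 4*exp(-1/2) - 2 ~ 0.426, so the state is manifestly non-classical.
h = 0.02
grid = [-5.0 + h * k for k in range(501)]
abs_volume = sum(abs(wigner_fock1(x, p)) for x in grid for p in grid) * h * h
delta = abs_volume - 1.0
print(delta)
```

A positive delta certifies that the quasi-distribution takes negative values somewhere, which is exactly the signature such operational estimation schemes try to bound from experimental data.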
This thesis focuses on the physics of neutron stars and its description with the methods of numerical relativity. In a first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, like the entropy in critical processes, should provide a deeper understanding of thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This of course brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level.
Our tests show that the new formulation yields results with a global truncation error which is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena concerning the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, their velocity and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considering the head-on collision of Gaussian distributions of matter.
Also in this case they found the appearance of type-I critical behaviour, but they also performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and, in particular, not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup which is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with previous findings, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
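In type-I critical phenomena, the survival time of the near-critical solution obeys the scaling law tau = -gamma * ln|P - P*| + const. A minimal sketch of how the scaling exponent gamma can be extracted by a least-squares fit (the data below are synthetic, with a hypothetical gamma, merely to show the procedure):

```python
import math

def fit_gamma(ps, taus, p_star):
    """Least-squares fit of the type-I scaling law
    tau = -gamma * ln|P - P*| + const  to survival times tau(P);
    returns the estimated gamma (= minus the fitted slope)."""
    xs = [math.log(abs(p - p_star)) for p in ps]
    n = len(xs)
    mx = sum(xs) / n
    mt = sum(taus) / n
    slope = (sum((x - mx) * (t - mt) for x, t in zip(xs, taus))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic survival times with gamma = 10.5 and offset c = 3 (hypothetical).
p_star, gamma, c = 1.0, 10.5, 3.0
ps = [p_star + 10 ** (-k / 2.0) for k in range(2, 12)]
taus = [-gamma * math.log(abs(p - p_star)) + c for p in ps]
print(fit_gamma(ps, taus, p_star))
```

The fine-structure oscillations mentioned above would appear as a periodic residual around this straight-line fit in ln|P - P*|.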
A newly developed azobenzene-containing material based on a supramolecular concept is investigated with respect to its structure formation during holographic exposure at 488 nm. The focus lies on one-dimensional, sinusoidal surface reliefs with periods below 500 nm. It is shown how the degree of cross-linking of the photosensitive layer influences structure formation in this size range. To maximize the structure depth, process parameters of the exposure as well as material parameters are varied systematically. Under standard conditions and moderate exposure intensities of about 200 mW/cm², structure depths of up to 80 nm form within a few minutes at a period of 400 nm. By tuning material parameters such as surface tension and viscosity, the maximum structure depth is doubled to 160 nm. The formation of two-dimensional gratings is also investigated by means of multiple exposures. The original structures are copied in a replica-molding process and transferred into layers of UV-curable polymers. The replication causes a slight degradation of the surface quality and a decrease of the structure depth; this loss is reduced by lowering the process temperature. Using the replicated surface gratings, second-order organic distributed-feedback (DFB) lasers are fabricated in order to study the influence of grating parameters on the emission properties of these lasers. For this purpose, the optical gain properties of selected organic emitter materials are first characterized by the variable stripe length method. Polystyrene (PS) doped with the laser dye pyrromethene 567 (PM567) shows a comparatively low gain threshold of 50 µJ/cm² at about 575 nm despite its concentration-limited low absorption. The active guest-host system of the conjugated polymers MEH-PPV and F8BT* exhibits a high absorption and a small gain threshold of 2.5 µJ/cm² at 630 nm. This behavior is also reflected in the emission properties of the DFB lasers fabricated from these materials. The thicknesses of the active layers lie in the range of hundreds of nanometers and are adjusted such that only the fundamental transverse modes can propagate in the waveguide. The grating periods are chosen such that an optical mode lies within the gain region of the emitter material. With FWHM values down to 0.3 nm, the emission lines of the lasers are spectrally very narrow and indicate a very good grating quality. The measurements yield minimum laser thresholds and maximum slope efficiencies of 4.0 µJ/cm² and 8.4 % for MEH-PPV in F8BT* (at about 640 nm) as well as 80 µJ/cm² and 0.9 % for PM567 in PS (at about 575 nm). Increasing the structure depth from 40 nm to 80 nm in MEH-PPV-doped F8BT* lasers leads to a marked increase of the out-coupled energy and of the slope efficiency, along with a slight decrease of the laser threshold. This results from the enhanced coupling between the laser mode and the grating. The emission of DFB lasers with two-dimensional surface gratings shows a reduced divergence but no influence on the laser threshold. Finally, the photostability of DFB lasers is measured under various conditions. Embedding a conjugated polymer in an active matrix as well as operation in a nitrogen atmosphere increase the lifetime to more than one million pulses. By combining surface gratings in PDMS films with electroactive substrates, an electrically controllable deformation of the diffraction grating is achieved and transferred to a DFB laser. The voltage-induced deformation is first characterized in diffraction experiments and an optimal operating point is determined. With the two elastomers SEBS12 and VHB4910, maximum period changes of 1.3 % and 3.4 %, respectively, are achieved in the gratings at a control voltage of 2 kV. The difference results from the different elastic moduli of the materials. Transferred to DFB lasers, a variation of the grating period perpendicular to the grating lines results in a continuous shift of the emission wavelength. With a voltage signal of 3.25 kV, the narrow-band emission of an elastic DFB laser is continuously tuned by almost 50 nm, from 604 nm to 557 nm. From the deformation behavior of both the bare diffraction gratings and the lasers, conclusions are drawn about the elasticity of the materials used, enabling improvements of the devices.
Mathematics plays a considerable, if ambivalent, role in physics education. It often even becomes an obstacle to learning physics and cannot unfold its emancipatory potential. The present work provides two building blocks for a well-founded conception of how to deal with mathematics when learning physics. In the theoretical part of the thesis, on the one hand, aspects of the role of mathematics in physics from the philosophy of science are reviewed and made accessible in context to the physics education research community. On the other hand, research results on learners' views about physics and mathematics as well as in the field of epistemology are compiled. In the empirical part of the thesis, views on the role of mathematics in physics held by students of grades 10 and 12 as well as by undergraduate pre-service physics teachers are surveyed by means of a questionnaire and evaluated using content-analysis and statistical methods. Among other things, the results show that, contrary to common opinion, mathematics in physics class does not carry negative connotations for the learners but, at least for younger learners, formal and algorithmic ones.
In the context of cosmological structure formation, sheets, filaments and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe does not reside in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20 % can be found in this state. The remainder (about 50 to 70 % of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, constituting the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on the star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filament structures. Density and temperature profiles, as well as velocity fields, are expected to leave their special imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes which regulate the amount of gas transported by the streams.
This includes the time evolution of filaments, as well as possible quenching mechanisms. In this context, the halo mass range in which cold-stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution, and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating due to a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations, parametrized by the wavelength of the initial perturbation L. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating due to a UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations into three dimensions, instead of a pancake structure, we obtain a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h filaments form which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core. Thus, our results confirm a multiphase structure, which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be attributed to the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process.
Furthermore, the cross section of the streams is constricted by the outward-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass higher than M_halo = 10^13 M_sun, and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not sufficiently strong for thermal conduction to be effective.
To identify extreme events in the dynamics of the Indian Summer Monsoon (ISM) in the geological past, I propose a novel approach based on quantifying fluctuations of a nonlinear similarity measure. It is sensitive to time intervals with marked changes in the dynamical complexity of short time series. A mathematical relationship between the new measure and dynamical invariants of the underlying system, such as fractal dimensions and Lyapunov exponents, is derived analytically. Furthermore, I develop a statistical test to estimate the significance of the dynamical transitions identified in this way. The strengths of the method are demonstrated by uncovering bifurcation structures in paradigmatic model systems, where, in comparison to the traditional Lyapunov exponents, the identification of more complex dynamical transitions becomes possible. We apply the newly developed method to the analysis of real measurement data in order to detect pronounced dynamical changes on millennial time scales in climate proxy records of the South Asian summer monsoon system during the Pleistocene. It turns out that many of these transitions are induced by the external influence of varying solar insolation as well as by factors internal to the climate system acting on the monsoon system (glacial cycles of the Northern Hemisphere and the onset of the tropical Walker circulation). Despite its applicability to general time series, the discussed approach is particularly suited to the study of short palaeoclimate time series. Owing to the underlying dynamics of the atmospheric circulation and to topographic influences, the rainfall over the Indian subcontinent during the ISM occurs in highly complex spatio-temporal patterns.
I present a detailed analysis of summer monsoon rainfall over the Indian peninsula based on event synchronization (ES), a measure of the nonlinear correlation of point processes such as rainfall events. Using hierarchical clustering algorithms, I first identify regions of particularly coherent or homogeneous monsoon rainfall; in doing so, the time-delay patterns of rain events can also be reconstructed. Beyond this, I carry out further analyses based on the theory of complex networks. These studies provide valuable insights into the spatial organization, scales and structures of heavy rainfall events above the 90th and 94th percentiles during the ISM (June to September). Furthermore, I investigate the influence of different critical synoptic atmospheric systems as well as of the steep topography of the Himalayas on these rainfall patterns. The presented method is not only suitable for visualizing the structure of extreme rainfall events but can also identify atmospheric water-vapour transport pathways and moisture sinks over the region on decadal scales. Moreover, a simple procedure based on complex networks is presented for deciphering the spatial fine structure and temporal evolution of monsoon rainfall extremes during the past 60 years.
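The core of event synchronization (ES) can be sketched in a few lines. The full measure of Quian Quiroga and co-workers uses a locally adaptive coincidence window derived from neighbouring inter-event intervals; the simplified sketch below uses a fixed window tau instead, purely for brevity:

```python
def event_sync(tx, ty, tau):
    """Simplified event synchronization Q for two sorted event-time lists.

    A pair (i, j) counts 1 toward c(x|y) if event i in x follows event j
    in y within the window tau, and 1/2 toward each direction if the two
    events coincide.  Q = (c(x|y) + c(y|x)) / sqrt(n_x * n_y) is
    normalized so that identical, well-separated event series give Q = 1.
    Note: the original measure uses a locally adaptive window; this
    fixed-window variant is a deliberate simplification.
    """
    def c(a, b):
        total = 0.0
        for ta in a:
            for tb in b:
                d = ta - tb
                if d == 0.0:
                    total += 0.5
                elif 0.0 < d <= tau:
                    total += 1.0
        return total
    return (c(tx, ty) + c(ty, tx)) / (len(tx) * len(ty)) ** 0.5

print(event_sync([0, 10, 20, 30], [0, 10, 20, 30], tau=2.0))  # 1.0
print(event_sync([0, 10, 20], [5, 15, 25], tau=2.0))          # 0.0
```

Applied pairwise to rainfall-event series at different grid points, such Q values form the weighted network on which the clustering and complex-network analyses above operate; the asymmetry between c(x|y) and c(y|x) carries the delay information.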
Actin-based directional motility is important for embryonic development, wound healing, immune responses, and the development of tissues. Actin and myosin are essential players in this process, which can be subdivided into protrusion, adhesion, and traction. Protrusion is the forward movement of the membrane at the leading edge of the cell. Adhesion is required to enable movement along a substrate, and traction finally leads to the forward movement of the entire cell body, including its organelles. While actin polymerization is the main driving force in cell protrusions, myosin motors lead to the contraction of the cell body. The goal of this work was to study the regulatory mechanisms of the motile machinery by selecting a representative key player for each stage of the signaling process: the regulation of Arp2/3 activity by WASP (actin system), the role of cGMP in myosin II assembly (myosin system), and the influence of phosphoinositide signaling (upstream receptor pathway). The model organism chosen for this work was the social amoeba Dictyostelium discoideum, due to the well-established knowledge of its cytoskeletal machinery, its easy handling, and the high motility of its vegetative and starvation-developed cells. First, I focused on the dynamics of the actin cytoskeleton by modulating the activity of one of its key players, the Arp2/3 complex. This was achieved using the carbazole derivative Wiskostatin, an inhibitor of the Arp2/3 activator WASP. Cells treated with Wiskostatin adopted a round shape, with no or few pseudopodia. With the help of a microfluidic cell squeezer device, I could show that Wiskostatin-treated cells display a reduced mechanical stability, comparable to cells treated with the actin-disrupting agent Latrunculin A. Furthermore, the WASP-inhibited cells adhere more strongly to a surface and show a reduced motility and chemotactic performance. However, the overall F-actin content of the cells was not changed.
Confocal microscopy and TIRF microscopy imaging showed that the cells maintained an intact actin cortex. Localized dynamic patches of increased actin polymerization were observed that, however, did not lead to membrane deformation. This indicated that the mechanisms of actin-driven force generation were impaired in Wiskostatin treated cells. It is concluded that in these cells, an altered architecture of the cortical network leads to a reduced overall stiffness of the cell, which is insufficient to support the force generation required for membrane deformation and pseudopod formation. Second, the role of cGMP in myosin II dynamics was investigated. Cyclic GMP is known to regulate the association of myosin II with the cytoskeleton. In Dictyostelium, intracellular cGMP levels increase when cells are exposed to chemoattractants, but also in response to osmotic stress. To study the influence of cyclic GMP on actin and myosin II dynamics, I used the laser-induced photoactivation of a DMACM-caged-Br-cGMP to locally release cGMP inside the cell. My results show that cGMP directly activates the myosin II machinery, but is also able to induce an actin response independently of cAMP receptor activation and signaling. The actin response was observed in both vegetative and developed cells. Possible explanations include cGMP-induced actin polymerization through VASP (vasodilator-stimulated phosphoprotein) or through binding of cGMP to cyclic nucleotide-dependent kinases. Finally, I investigated the role of phosphoinositide signaling using the Polyphosphoinositide-Binding Peptide (PBP10) that binds preferentially to PIP2. Phosphoinositides can recruit actin-binding proteins to defined subcellular sites and alter their activity. Neutrophils, as well as developed Dictyostelium cells produce PIP3 in the plasma membrane at their leading edge in response to an external chemotactic gradient. 
Although not essential for chemotaxis, phosphoinositides are proposed to act as an internal compass for the cell. When treated with the peptide PBP10, cells became round, with fewer or no pseudopods. PH-CRAC translocation to the membrane still occurred, even at low cAMP stimuli, but cell motility (random and directional) was reduced. My data revealed that the decrease in the pool of available PIP2 in the cell is sufficient to impair cell motility, but that enough PIP2 remains for PIP3 to be formed in response to chemoattractant stimuli. My data thus highlight how sensitive cell motility and morphology are to changes in phosphoinositide signaling. In summary, I have analyzed representative regulatory mechanisms that govern key parts of the motile machinery and characterized their impact on cellular properties including mechanical stability, adhesion, and chemotaxis.
Organic thin film transistors (TFTs) are an attractive option for low-cost electronic applications and may be used for active matrix displays and for RFID applications. To extend the range of applications, there is a need to develop and optimise the performance of non-volatile memory devices that are compatible with the solution-processing fabrication procedures used in plastic electronics. A possible candidate is an organic TFT incorporating the ferroelectric co-polymer poly(vinylidenefluoride-trifluoroethylene) (P(VDF-TrFE)) as the gate insulator. Dielectric measurements have been carried out on all-organic metal-insulator-semiconductor (MIS) structures with P(VDF-TrFE) as the gate insulator. The capacitance spectra of MIS devices were measured under different biases, showing the effect of charge accumulation and depletion on the Maxwell-Wagner peak. The position and height of this peak clearly indicate the lack of stable depletion behavior and the decrease of mobility with increasing depletion zone width, i.e. upon moving into the P3HT bulk. The lack of stable depletion was further investigated with capacitance-voltage (C-V) measurements. When the structure was driven into depletion, C-V plots showed a positive flat-band voltage shift, arising from the change in polarization state of the ferroelectric insulator. When biased into accumulation, the polarization was reversed. It is shown that the two polarization states are stable, i.e. no depolarization occurs below the coercive field. However, negative charge trapped at the semiconductor-insulator interface during the depletion cycle masks the negative shift in flat-band voltage expected during the sweep to accumulation voltages. The measured output characteristics of the studied ferroelectric field-effect transistors confirmed the results of the C-V plots.
Furthermore, the results indicated a trapping of electrons at the positively charged surfaces of the ferroelectrically polarized P(VDF-TrFE) crystallites near the insulator/semiconductor interface during the first poling cycles. The study of the MIS structure by means of thermally stimulated currents (TSC) revealed further evidence for the stability of the polarization under depletion voltages. It was shown that the lack of stable depletion behavior is caused by the compensation of the orientational polarization by fixed electrons at the interface, and not by the depolarization of the insulator, as proposed in several publications. The above results suggest that the performance of non-volatile memory devices can be improved by optimization of this interface.
The Casimir-Polder interaction between a single neutral atom and a nearby surface, arising from the (quantum and thermal) fluctuations of the electromagnetic field, is a cornerstone of cavity quantum electrodynamics (cQED), and theoretically well established. Recently, Bose-Einstein condensates (BECs) of ultracold atoms have been used to test the predictions of cQED. The purpose of the present thesis is to upgrade single-atom cQED with the many-body theory needed to describe trapped atomic BECs. Tools and methods are developed in a second-quantized picture that treats atom and photon fields on the same footing. We formulate a diagrammatic expansion using correlation functions for both the electromagnetic field and the atomic system. The formalism is applied to investigate, for BECs trapped near surfaces, dispersion interactions of the van der Waals-Casimir-Polder type, and the Bosonic stimulation in spontaneous decay of excited atomic states. We also discuss a phononic Casimir effect, which arises from the quantum fluctuations in an interacting BEC.
Active Galactic Nuclei (AGN) are powered by gas accretion onto supermassive Black Holes (BH). The luminosity of AGN can exceed the integrated luminosity of their host galaxies by orders of magnitude; such objects are then classified as Quasi-Stellar Objects (QSOs). Some mechanisms are needed to trigger the nuclear activity in galaxies and to feed the nuclei with gas. Among several possibilities, such as gravitational interactions, bar instabilities, and smooth gas accretion from the environment, the dominant process has yet to be identified. Feedback from AGN may be an important ingredient in the evolution of galaxies. However, the details of this coupling between AGN and their host galaxies remain unclear. In this work we aim to investigate the connection between AGN and their host galaxies by studying the properties of the extended ionised gas around AGN. Our study is based on observations of ~50 luminous, low-redshift (z<0.3) QSOs using the novel technique of integral field spectroscopy, which combines imaging and spectroscopy. After spatially separating the emission of AGN-ionised gas from HII regions, ionised solely by recently formed massive stars, we demonstrate that the specific star formation rates in several disc-dominated AGN hosts are consistent with those of normal star-forming galaxies, while others display no detectable star formation activity. Whether the star formation has been actively suppressed in those particular host galaxies by the AGN, or their gas content is intrinsically low, remains an open question. By studying the kinematics of the ionised gas, we find evidence for non-gravitational motions and outflows on kpc scales only in a few objects. The gas kinematics in the majority of objects, however, indicate a gravitational origin. This suggests that the importance of AGN feedback may have been overrated in theoretical works, at least at low redshifts.
The [OIII] line is the strongest optical emission line of AGN-ionised gas, which can extend over several kpc; this region is usually called the Narrow-Line Region (NLR). We perform a systematic investigation of the NLR size and determine a NLR size-luminosity relation that is consistent with the scenario of a constant ionisation parameter throughout the NLR. We show that previous narrow-band imaging with the Hubble Space Telescope underestimated the NLR size by a factor of >2 and that the continuum AGN luminosity is better correlated with the NLR size than the [OIII] luminosity. These effects may account for the different NLR size-luminosity relations reported in previous studies. On the other hand, we do not detect extended NLRs around all QSOs, and demonstrate that the detection of extended NLRs goes along with radio emission. We employ emission line ratios as a diagnostic for the abundance of heavy elements in the gas, i.e. its metallicity, and find that the radial metallicity gradients are always flatter than in inactive disc-dominated galaxies. This can be interpreted as evidence for radial gas flows from the outskirts of these galaxies to the nucleus. Recent or ongoing galaxy interactions are likely responsible for this effect and may turn out to be a common prerequisite for QSO activity. The metallicities of bulge-dominated hosts are systematically lower than those of their disc-dominated counterparts, which we interpret as evidence for minor mergers, supported by our detailed study of the bulge-dominated host of the luminous QSO HE 1029-1401, or for smooth gas accretion from the environment. Along the same lines, another new discovery is that HE 2158-0107 at z=0.218 is the most metal-poor luminous QSO observed so far. Together with a large (30 kpc) extended structure of low-metallicity ionised gas, we propose smooth cold gas accretion as the most likely scenario.
Theoretical studies have suggested that this process was much more important at earlier epochs of the universe, so that HE 2158-0107 might be an ideal laboratory for studying this mechanism of galaxy and BH growth at low redshift in more detail in the future.
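The constant-ionisation-parameter scenario invoked above implies an NLR size-luminosity relation r ∝ L^{1/2}: since U = Q_ion / (4π r² n_H c), fixing U and the gas density n_H ties the radius to the ionising photon rate. A back-of-the-envelope sketch (all numerical values are illustrative, not fitted values from this work):

```python
import math

C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def nlr_radius(q_ion, u, n_h):
    """Radius [cm] at which an ionising photon rate q_ion [s^-1] yields
    ionisation parameter u for a hydrogen density n_h [cm^-3]:
    u = q_ion / (4 pi r^2 n_h c)  =>  r = sqrt(q_ion / (4 pi u n_h c))."""
    return math.sqrt(q_ion / (4.0 * math.pi * u * n_h * C_LIGHT))
```

Because r scales as the square root of q_ion, a factor of 100 in luminosity moves the NLR boundary outward by a factor of 10, which is the slope the size-luminosity relation tests.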
Supermassive black holes are a fundamental component of the universe in general and of galaxies in particular. Almost every massive galaxy harbours a supermassive black hole (SMBH) in its center. Furthermore, there is a close connection between the growth of the SMBH and the evolution of its host galaxy, manifested in the relationship between the mass of the black hole and various properties of the galaxy's spheroid component, like its stellar velocity dispersion, luminosity or mass. Understanding this relationship and the growth of SMBHs is essential for our picture of galaxy formation and evolution. In this thesis, I make several contributions to improve our knowledge on the census of SMBHs and on the coevolution of black holes and galaxies. The first route I follow on this road is to obtain a complete census of the black hole population and its properties. Here, I focus particularly on active black holes, observable as Active Galactic Nuclei (AGN) or quasars. These are found in large surveys of the sky. In this thesis, I use one of these surveys, the Hamburg/ESO survey (HES), to study the AGN population in the local volume (z~0). The demographics of AGN are traditionally represented by the AGN luminosity function, the distribution function of AGN at a given luminosity. I determined the local (z<0.3) optical luminosity function of so-called type 1 AGN, based on the broad band B_J magnitudes and AGN broad Halpha emission line luminosities, free of contamination from the host galaxy. I combined this result with fainter data from the Sloan Digital Sky Survey (SDSS) and constructed the best current optical AGN luminosity function at z~0. The comparison of the luminosity function with higher redshifts supports the current notion of 'AGN downsizing', i.e. the space density of the most luminous AGN peaks at higher redshifts and the space density of less luminous AGN peaks at lower redshifts. 
However, the AGN luminosity function does not reveal the full picture of active black hole demographics. This requires knowledge of the physical quantities, foremost the black hole mass and the accretion rate, and of the respective distribution functions: the active black hole mass function and the Eddington ratio distribution function. I developed a method for an unbiased estimate of these two distribution functions, employing a maximum likelihood technique that fully accounts for the selection function. I used this method to determine the active black hole mass function and the Eddington ratio distribution function for the local universe from the HES. I found a wide intrinsic distribution of black hole accretion rates and black hole masses. The comparison of the local active black hole mass function with the local total black hole mass function reveals evidence for 'AGN downsizing', in the sense that in the local universe the most massive black holes are in a less active stage than lower-mass black holes. The second route I follow is a study of redshift evolution in the black hole-galaxy relations. While theoretical models can in general explain the existence of these relations, their redshift evolution puts strong constraints on these models. Observational studies of the black hole-galaxy relations naturally suffer from selection effects. These can potentially bias the conclusions inferred from the observations if they are not taken into account. I investigated the issue of selection effects on type 1 AGN samples in detail and discuss various sources of bias, e.g. an AGN luminosity bias, an active fraction bias, and an AGN evolution bias. If the selection function of the observational sample and the underlying distribution functions are known, it is possible to correct for this bias. I present a fitting method to obtain an unbiased estimate of the intrinsic black hole-galaxy relations from samples that are affected by selection effects.
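For reference, the Eddington ratio used above is λ = L_bol / L_Edd, with L_Edd = 4πG M m_p c / σ_T ≈ 1.26×10^38 (M_BH/M_⊙) erg s⁻¹. A minimal sketch in cgs units (rounded constants; not the code used in the thesis):

```python
import math

# cgs constants (rounded)
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
M_P = 1.6726e-24      # proton mass [g]
C = 2.998e10          # speed of light [cm s^-1]
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]

def eddington_luminosity(m_bh_msun):
    """L_Edd = 4 pi G M m_p c / sigma_T in erg s^-1, for a mass in solar units."""
    return 4.0 * math.pi * G * (m_bh_msun * M_SUN) * M_P * C / SIGMA_T

def eddington_ratio(l_bol, m_bh_msun):
    """lambda = L_bol / L_Edd, for a bolometric luminosity in erg s^-1."""
    return l_bol / eddington_luminosity(m_bh_msun)
```

A 10^8 M_⊙ black hole radiating at 1.26×10^46 erg s⁻¹ thus sits at λ ≈ 1, the nominal Eddington limit.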
Third, I try to improve our census of dormant black holes and the determination of their masses. One of the most important techniques to determine the black hole mass in quiescent galaxies is stellar dynamical modeling. This method employs photometric and kinematic observations of the galaxy and infers the gravitational potential from the stellar orbits. It can reveal the presence of the black hole and yield its mass, if the sphere of the black hole's gravitational influence is spatially resolved. However, the presence of a dark matter halo is usually ignored in the dynamical modeling, potentially biasing the determined black hole mass. I ran dynamical models for a sample of 12 galaxies, including a dark matter halo. For galaxies whose black hole sphere of influence is not well resolved, I found that the black hole mass is systematically underestimated when the dark matter halo is ignored, while there is almost no effect for galaxies with a well-resolved sphere of influence.
Corvino, Corvino and Schoen, Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of space-like infinity, yet quite general in the interior. The proof relies on some abstract, non-constructive arguments which makes it difficult to calculate such data numerically by using similar arguments. A quasilinear elliptic system of equations is presented of which we expect that it can be used to construct vacuum initial data which are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions. It is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when trying to improve this result to show the existence of solutions that are asymptotically static at higher order. The problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions in asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed. This is done in a way that opens the possibility to perform numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to that part of the quasilinear system which gives rise to the difficulties. For this decomposition analogous results are obtained. It turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications. 
The question, whether the results obtained so far can be used again to show by a perturbation argument the existence of vacuum initial data which approach static solutions at infinity at any given order, thus remains open. The answer requires further analysis and perhaps new methods.
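On a periodic domain, the Helmholtz decomposition that serves as the model problem above reduces to a projection in Fourier space; the difficulty discussed in the thesis is precisely that this simplicity is lost for asymptotically flat manifolds with prescribed decay rates. A periodic 2-D sketch for illustration:

```python
import numpy as np

def helmholtz_decompose_2d(vx, vy):
    """Split a periodic 2-D vector field into curl-free (longitudinal) and
    divergence-free (transverse) parts via projection in Fourier space."""
    ny, nx = vx.shape
    kx = np.fft.fftfreq(nx)
    ky = np.fft.fftfreq(ny)
    KX, KY = np.meshgrid(kx, ky)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                      # avoid 0/0; the mean flow stays transverse
    Vx, Vy = np.fft.fft2(vx), np.fft.fft2(vy)
    proj = (KX * Vx + KY * Vy) / k2     # (k . V) / k^2
    lx = np.fft.ifft2(KX * proj).real   # longitudinal part: k (k . V) / k^2
    ly = np.fft.ifft2(KY * proj).real
    return (lx, ly), (vx - lx, vy - ly)
```

A pure gradient field is returned entirely in the longitudinal part, a divergence-free field entirely in the transverse part; on a non-compact manifold with decay conditions at infinity, no such clean algebraic projection is available.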
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, the molecules act on a length scale, for which thermal collisions have a considerable impact onto their motion. In this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description for the energy conversion by the molecular motor myosin V, using many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: For loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured through the network description supplemented by a Markov process to describe the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule, and treating the mechanical and chemical processes on equal grounds. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles. 
In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions. These reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description for the myosin V motor as studied in single motor experiments.
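The network approach can be illustrated with a toy model far simpler than the actual myosin V network: a unicyclic Markov motor whose steady-state velocity follows from the stationary distribution of its rate matrix (the rates and the 36 nm step size below are illustrative placeholders):

```python
import numpy as np

def motor_velocity(wf, wb, d=36e-9, n_states=3):
    """Steady-state velocity of a unicyclic n-state Markov motor with uniform
    forward rate wf and backward rate wb [1/s]; one full cycle advances the
    motor by the step size d [m]."""
    n = n_states
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = wf            # transition i -> i+1
        W[(i - 1) % n, i] = wb            # transition i -> i-1
    np.fill_diagonal(W, -W.sum(axis=0))   # columns of the rate matrix sum to zero
    # stationary distribution p solves W p = 0: take the SVD null vector
    _, _, vh = np.linalg.svd(W)
    p = np.abs(vh[-1])
    p /= p.sum()
    flux = wf * p[0] - wb * p[1]          # net probability flux through one link
    return d * flux
```

At detailed balance (wf = wb) the flux, and hence the velocity, vanishes; a chemomechanical cycle only runs when the chemical driving breaks this balance, which is the thermodynamic constraint exploited in the thesis.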
In the present work, synchronization phenomena in complex dynamical systems exhibiting multiple time scales have been analyzed. Multiple time scales can be active in different manners. Three different systems have been analyzed with different methods from data analysis. The first system studied is a large heterogeneous network of bursting neurons, that is, a system with two predominant time scales: the fast firing of action potentials (spikes) and the bursts of repetitive spikes followed by a quiescent phase. This system has been integrated numerically and analyzed with methods based on recurrence in phase space. A notable result is the different transitions to synchrony found on the two distinct time scales. Moreover, an anomalous synchronization effect can be observed on the fast time scale, i.e. there is a range of the coupling strength where desynchronization occurs. The second system, analyzed numerically as well as experimentally, is a pair of coupled CO₂ lasers in a chaotic bursting regime. This system is interesting due to its similarity with epidemic models. We explain the bursts by different time scales generated from unstable periodic orbits embedded in the chaotic attractor and perform a synchronization analysis of these different orbits utilizing the continuous wavelet transform. We find a diverse route to synchrony of these different observed time scales. The last system studied is a small network motif of limit cycle oscillators. Specifically, we have studied a hub motif, which serves as an elementary building block for scale-free networks, a type of network found in many real-world applications. These hubs are of special importance for communication and information transfer in complex networks. Here, a detailed study of the mechanism of synchronization in oscillatory networks with a broad frequency distribution has been carried out. In particular, we find a remote synchronization of nodes in the network which are not directly coupled.
We also explain the responsible mechanism and its limitations and constraints. Further we derive an analytic expression for it and show that information transmission in pure phase oscillators, such as the Kuramoto type, is limited. In addition to the numerical and analytic analysis an experiment consisting of electrical circuits has been designed. The obtained results confirm the former findings.
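The synchronization of a hub motif can be reproduced in a few lines with the Kuramoto model of coupled phase oscillators; the parameters below (a star graph of one hub and five leaves, a narrow frequency spread) are purely illustrative and not taken from the experiment:

```python
import numpy as np

def kuramoto_order(omega, A, K, dt=0.01, steps=20000, seed=0):
    """Euler-integrate dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)
    and return the Kuramoto order parameter r averaged over the second half."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(omega))
    rs = []
    for t in range(steps):
        diff = theta[None, :] - theta[:, None]    # diff[i, j] = theta_j - theta_i
        theta = theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))
        if t >= steps // 2:
            rs.append(abs(np.exp(1j * theta).mean()))
    return float(np.mean(rs))

# star graph: node 0 is the hub, coupled to 5 leaves
N = 6
A = np.zeros((N, N))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
omega = np.linspace(-0.1, 0.1, N)                 # narrow frequency spread
r_coupled = kuramoto_order(omega, A, K=2.0)       # phase-locked star, r close to 1
r_uncoupled = kuramoto_order(omega, A, K=0.0)     # drifting phases, lower r
```

In such a motif the leaves interact only through the hub, which is the setting in which the remote synchronization of not directly coupled nodes was analyzed.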
The aim of this work is the investigation of the active components and their interactions in partially organic hybrid solar cells. These consist of a thin titanium dioxide layer combined with a thin polymer layer. The efficiency of the hybrid solar cells is determined by the light absorption in the polymer, the dissociation of the generated excitons at the active interface between TiO2 and polymer, and by the generation and extraction of free charge carriers. To optimize the solar cells, fundamental physical interactions between the materials used, as well as the influence of various fabrication parameters, were investigated. Among other things, questions concerning the optimal choice of materials and preparation conditions were answered, and fundamental influences such as film morphology and polymer infiltration were examined in more detail. First, a selection was made from differently prepared titanium dioxide (acceptor layer) for use in hybrid solar cells. The criterion here was the differing morphology due to surface texture, film structure, and crystallinity, and the resulting solar cell properties. For the subsequent investigations, mesoporous TiO2 films from a new nanoparticle synthesis, which allows crystalline particles to be produced already during synthesis, were used as the electron acceptor, and conjugated polymers based on poly(p-phenylene vinylene) (PPV) or thiophene were used as the donor material. Upon thermal treatment of the TiO2 layers, a temperature-dependent change of the morphology, but not of the crystal structure, occurs. The effects on the solar cell properties were documented and discussed. In order to exploit the advantage of the nanoparticle synthesis, namely the formation of crystalline TiO2 particles at low temperatures, first experiments on UV crosslinking were carried out.
Besides the properties of the oxide layer, the influence of the polymer morphology, varied through the choice of solvent and the annealing temperature, was also investigated. It could be shown that, among other factors, the viscosity of the polymer solution affects the infiltration into the TiO2 layer and thereby the efficiency of the solar cell. A further approach to increasing the efficiency is the development of new hole-conducting polymers that absorb light over as wide a spectral range as possible and are matched to the band gap of TiO2. To this end, several novel concepts, e.g. the combination of thiophene and phenyl units, were examined in more detail. The sensitization of the titanium dioxide layer, following the higher efficiencies of dye-sensitized cells, was also considered. In summary, within the scope of this work important parameters influencing the function of hybrid solar cells could be identified and in part discussed in more detail. For several limiting factors, concepts for improvement or avoidance were presented.
The present thesis was born and evolved within the RAdial Velocity Experiment (RAVE), with the goal of measuring chemical abundances from the RAVE spectra and exploiting them to investigate the chemical gradients along the plane of the Galaxy, in order to provide constraints on possible Galactic formation scenarios. RAVE is a large spectroscopic survey which aims to observe spectroscopically ~10^6 stars by the end of 2012 and to measure their radial velocities, atmospheric parameters, and chemical abundances. The project makes use of the UK Schmidt telescope at the Australian Astronomical Observatory (AAO) in Siding Spring, Australia, equipped with the multi-object spectrograph 6dF. To date, RAVE has collected and measured more than 450,000 spectra. The precision of the chemical abundance estimates depends on the reliability of the atomic and atmosphere parameters adopted (in particular the oscillator strengths of the absorption lines and the effective temperature, gravity, and metallicity of the stars measured). Therefore we first identified 604 absorption lines in the RAVE wavelength range and refined their oscillator strengths with an inverse spectral analysis. Then, we improved the RAVE stellar parameters by modifying the RAVE pipeline and the spectral library the pipeline relies on. The modifications removed some systematic errors in the stellar parameters discovered during this work. To obtain chemical abundances, we developed two different processing pipelines. Both of them measure chemical abundances by assuming stellar atmospheres in Local Thermodynamic Equilibrium (LTE). The first one determines elemental abundances from equivalent widths of absorption lines. Since this pipeline showed poor sensitivity to abundances relative to iron, it has been superseded. The second one exploits chi^2 minimization between observed and model spectra. Thanks to its precision, it has been adopted for the creation of the RAVE chemical catalogue.
This pipeline provides abundances with uncertainties of about ~0.2 dex for spectra with signal-to-noise ratio S/N>40 and ~0.3 dex for spectra with 20<S/N<40. For this work, the pipeline measured chemical abundances of up to 7 elements for 217,358 RAVE stars. With these data we investigated the chemical gradients along the Galactic radius of the Milky Way. We found that stars with low vertical velocities |W| (which stay close to the Galactic plane) show an iron abundance gradient in agreement with previous works (~ -0.07 dex kpc^-1), whereas stars with larger |W|, which are able to reach larger heights above the Galactic plane, show progressively flatter gradients. The gradients of the other elements follow the same trend. This suggests that an efficient radial mixing acts in the Galaxy or that the thick disk formed from homogeneous interstellar matter. In particular, we found hundreds of stars which can be kinematically classified as thick disk stars but exhibit a chemical composition typical of the thin disk. A few stars of this kind have already been detected by other authors, and their origin is still not clear. One possibility is that they are thin disk stars that were kinematically heated and then underwent an efficient radial mixing process which blurred (and so flattened) the gradient. Alternatively, they may be a "transition population" which represents an evolutionary bridge between the thin and thick disks. Our analysis shows that the two explanations are not mutually exclusive. Future follow-up high-resolution spectroscopic observations will clarify their role in the evolution of the Galactic disk.
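The χ² minimization step can be sketched as a grid search over a library of synthetic spectra; this toy version ignores the many refinements of the actual RAVE chemical pipeline (continuum normalization, several elements fitted at once, interpolation within the grid, etc.):

```python
import numpy as np

def best_fit_abundance(flux_obs, sigma, model_grid, abundances):
    """Grid-search chi^2 fit: model_grid[k] is a synthetic spectrum computed
    for abundance abundances[k] on the same wavelength grid as flux_obs.
    Returns the abundance minimizing chi^2 = sum ((obs - model) / sigma)^2,
    together with that chi^2 value."""
    resid = (flux_obs[None, :] - model_grid) / sigma[None, :]
    chi2 = (resid ** 2).sum(axis=1)
    k = int(np.argmin(chi2))
    return abundances[k], float(chi2[k])
```

The shape of χ² around its minimum also yields the quoted abundance uncertainties as a function of the signal-to-noise ratio.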
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols as parts of the Arctic atmosphere play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute with continuous altitude resolved aerosol profiles to the understanding of occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. The attention was turned to the analysis of periods with high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of both the years 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months, subsequently to the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction and depolarization) as well as water vapor profiles within the lower troposphere. Combination of KARL data with data from other instruments on site, namely radiosondes, sun photometer, Micro Pulse LIDAR, and tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere. 
The two spring periods, March and April of 2007 and 2009, were first analyzed on the basis of meteorological parameters, such as local temperature and relative-humidity profiles as well as large-scale pressure patterns and air-mass origin regions. Here, it was not possible to find a clear correlation between enhanced AOD and air-mass origin. However, in a comparison of two cloud-free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air-mass transport through the central Arctic. This suggests the occurrence of aerosol transformation processes during the aerosol transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other case studies suggest a differentiation between three aerosol event types and their origins: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary-layer aerosols, and an enhanced aerosol load across wide portions of the troposphere. For the spring period 2007, the available KARL data were statistically analyzed using a characterization scheme based on the optical characteristics of the scattering particles. The scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 provided the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar in both years, with maximum values above 0.1 about three to five weeks after the respective eruption. In both years, the stratospheric AOD persisted at higher values than usual until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct and thin layers, which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude.
Three case studies were performed, all revealing rather large indices of refraction of m = (1.53–1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long-term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data from other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
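The AOD values discussed above are vertical integrals of the aerosol extinction coefficient over altitude. A minimal sketch of that relation with a hypothetical two-component extinction profile; all profile parameters here are illustrative and not KARL retrievals:

```python
import numpy as np

# Hypothetical extinction profile alpha(z) [1/m]: an exponentially
# decaying background plus a lofted aerosol layer near 4 km, on a grid
# starting at 450 m (the lower profiling limit quoted above).  The
# amplitudes are tuned only to give an AOD of order 0.1, comparable to
# the springtime maxima mentioned in the abstract.
z = np.arange(450.0, 12000.0, 30.0)                           # altitude [m]
alpha = 6.0e-5 * np.exp(-z / 2000.0)                          # background
alpha += 1.5e-5 * np.exp(-0.5 * ((z - 4000.0) / 300.0) ** 2)  # layer

# Aerosol optical depth: AOD = integral of alpha dz (dimensionless),
# here via the trapezoidal rule.
aod = np.sum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(z))
print(f"AOD = {aod:.3f}")  # ~0.107 for these illustrative numbers
```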
The recent discovery of an intricate and nontrivial interaction topology among the elements of a wide range of natural systems has changed the way we understand complexity. For example, the axonal fibres transmitting electrical information between cortical regions form a network which is neither regular nor completely random. Their structure seems to follow functional principles that balance segregation (functional specialisation) and integration. Cortical regions are clustered into modules specialised in processing different kinds of information, e.g. visual or auditory. However, in order to generate a global perception of the real world, the brain needs to integrate the distinct types of information. Where this integration happens is still unknown. We have performed an extensive and detailed graph-theoretical analysis of the cortico-cortical organisation in the brain of cats, trying to relate the individual and collective topological properties of the cortical areas to their function. We conclude that the cortex possesses a very rich communication structure, composed of a mixture of parallel and serial processing paths capable of accommodating dynamical processes with a wide variety of time scales. The communication paths between the sensory systems are not random, but largely mediated by a small set of areas. Far from acting as mere transmitters of information, these central areas are densely connected to each other, strongly indicating their functional role as integrators of multisensory information. In the quest to uncover the structure-function relationship of cortical networks, the peculiarities of this network have led us to continuously reconsider established graph measures. For example, a normalised formalism to identify the “functional roles” of vertices in networks with community structure is proposed.
The tools developed for this purpose open the door to novel community-detection techniques which may also characterise the overlap between modules. The concept of integration has been revisited and adapted to the necessities of the network under study. Additionally, analytical and numerical methods have been introduced to facilitate understanding of the complicated statistical interrelations between the distinct network measures. These methods help to construct new significance tests capable of discriminating the relevant properties of real networks from side effects of evolutionary-growth processes.
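A widely used quantifier of such "functional roles" is the participation coefficient of Guimerà and Amaral, which measures how evenly a vertex's links are spread over the modules; the normalised formalism proposed in the thesis builds on measures of this kind. A toy sketch on a hypothetical seven-vertex network:

```python
from collections import defaultdict

# Toy undirected network: two triangles (modules 0 and 1) plus one
# "connector" vertex 6 bridging them.  The participation coefficient
# P_i = 1 - sum_s (k_is / k_i)^2 is ~0 for a provincial vertex whose
# links stay inside its module and grows toward 1 for a connector.
edges = [(0, 1), (0, 2), (1, 2),     # module 0 triangle
         (3, 4), (3, 5), (4, 5),     # module 1 triangle
         (2, 6), (5, 6)]             # vertex 6 links both modules
module = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1, 6: 0}

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def participation(i):
    k = len(adj[i])
    per_module = defaultdict(int)
    for j in adj[i]:
        per_module[module[j]] += 1
    return 1.0 - sum((ks / k) ** 2 for ks in per_module.values())

print(participation(0))  # all links inside module 0 -> 0.0
print(participation(6))  # links split evenly over both modules -> 0.5
```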
The influence of dynamics on stratospheric ozone variability over the Arctic in early winter
(2010)
The early-winter ozone content is an indicator of the ozone content in late winter/spring. However, it varies strongly from year to year owing to subsidence, chemical ozone depletion and wave activity. This thesis shows that this variability is largely due to dynamical processes during the formation phase of the Arctic polar vortex. Furthermore, the hitherto missing link between the early- and late-winter ozone content with respect to dynamics and chemistry is established. To investigate the relationship between the air-mass composition enclosed in the polar vortex and the ozone amount, observational data from satellite instruments and ozone sondes as well as model simulations with the Lagrangian chemistry/transport model ATLAS were used. The vertical component of the Eliassen-Palm flux vector through the 100 hPa surface, averaged over area (45–75°N) and time (August–November), reveals a connection between the early-winter air-mass composition inside the vortex and the vortex formation phase. This connection is, however, only valid for the lower stratosphere, since the vertical component does not capture the wave-propagation conditions, which change within the stratosphere. For an improved representation of the signal with altitude, a new integral quantity based on the wave amplitude and the Charney-Drazin criterion was defined. This new quantity links the wave activity during the vortex formation phase both to the air-mass composition in the polar vortex and to the latitudinal ozone distribution. Enhanced wave activity leads to more air from lower, ozone-rich latitudes inside the polar vortex. In autumn and early winter, however, chemical processes that drive ozone toward equilibrium destroy the interannual ozone variability inside the vortex that is induced by dynamical processes during the formation phase of the Arctic polar vortex.
An analysis of the persistence of a dynamically induced ozone anomaly into midwinter allows an estimate of the influence of these dynamical processes on the Arctic ozone content. For this purpose, model runs with the Lagrangian chemistry/transport model ATLAS were performed for the winter 1999–2000, providing detailed information on the preservation of an artificial ozone variability with respect to time, altitude and latitude. In summary, the ozone variability induced dynamically during the vortex formation phase persists longer inside than outside the polar vortex and loses its significant effect on midwinter ozone variability above 750 K potential temperature. At lower altitudes, the surviving fraction of the original perturbation is large, up to 90% at the 450 K level. Within this altitude range, the dynamical processes during the vortex formation phase exert a decisive influence on the midwinter ozone content.
Soft nanocomposites with enhanced electromechanical response for dielectric elastomer actuators
(2011)
Electromechanical transducers based on elastomer capacitors are presently considered for many soft actuation applications, due to their large reversible deformation in response to electric-field-induced electrostatic pressure. The high operating voltage of such devices is currently a major drawback, hindering their use in applications such as biomedical devices and biomimetic robots; it could, however, be reduced by careful design of the material properties. The main targets for improvement are increasing the relative permittivity of the active material while maintaining high electric breakdown strength and low stiffness, which would lead to enhanced electrostatic storage ability and hence a reduced operating voltage. Improvement of the functional properties is possible through the use of nanocomposites, which exploit the high surface-to-volume ratio of the nanoscale filler, resulting in large effects on macroscale properties. This thesis explores several strategies for nanomaterials design. The resulting nanocomposites are fully characterized with respect to their electrical and mechanical properties by means of dielectric spectroscopy, tensile mechanical analysis, and electric breakdown tests. First, nanocomposites consisting of high-permittivity rutile TiO2 nanoparticles dispersed in the thermoplastic block copolymer SEBS (poly(styrene-co-ethylene-co-butylene-co-styrene)) are shown to exhibit permittivity increases of up to 3.7 times, leading to a 5.6-fold improvement in electrostatic energy density, but with a trade-off in mechanical properties (an 8-fold increase in stiffness). The combined variation in electrical and mechanical properties still allows for electromechanical improvement, such that a 27 % reduction of the driving electric field is found compared to the pure elastomer.
Second, it is shown that the use of conductive nanofiller particles (carbon black, CB) can lead to a strong increase of the relative permittivity through percolation, however with detrimental side effects. These are due to localized enhancement of the electric field within the composite, which leads to sharp reductions in electric breakdown strength. Hence, with regard to the stored electrical energy, the increase in permittivity does not make up for the reduction in breakdown strength, which may prohibit the practical use of such composites. Third, a completely new approach for increasing the relative permittivity and electrostatic energy density of a polymer, based on 'molecular composites', is presented, relying on chemically grafting soft π-conjugated polyaniline (PANI) macromolecules to a flexible elastomer backbone. Polarization caused by charge displacement along the conjugated backbone is found to induce a large and controlled permittivity enhancement (470 % over the elastomer matrix), while the chemical bonding encapsulates the PANI chains, resulting in hardly any reduction in electric breakdown strength and hence a large increase in stored electrostatic energy. This is shown to lead to an improvement in the sensitivity of the measured electromechanical response (an 83 % reduction of the driving electric field) as well as in the maximum actuation strain (250 %). These results represent a large step forward in the understanding of the strategies which can be employed to obtain high-permittivity polymer materials with practical use for electro-elastomer actuation.
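The connection between permittivity, stiffness and driving field underlying these numbers follows from the Maxwell-stress estimate of the thickness strain, s ≈ ε0·εr·E²/Y. A sketch with illustrative numbers, not the thesis measurements:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def driving_field(strain, eps_r, young_modulus):
    """Electric field needed for a given thickness strain, using the
    small-strain Maxwell-stress estimate s = eps0 * eps_r * E^2 / Y."""
    return math.sqrt(strain * young_modulus / (EPS0 * eps_r))

# Illustrative values: a soft elastomer with eps_r = 2.3 and Y = 1 MPa,
# versus a hypothetical composite with permittivity raised 4x at
# unchanged stiffness.  (The real composites above also stiffen, which
# eats into this gain.)
e_matrix = driving_field(0.05, 2.3, 1.0e6)
e_composite = driving_field(0.05, 9.2, 1.0e6)
print(f"field reduced by {1 - e_composite / e_matrix:.0%}")  # 50% at 4x eps_r
```

Because E scales as 1/sqrt(εr/Y), it is the ratio of permittivity to stiffness that matters, which is why the 3.7-fold permittivity gain with an 8-fold stiffening still yields only a modest field reduction in practice.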
The Greenland Ice Sheet (GIS) contains enough water volume to raise global sea level by over 7 meters. It is a relic of past glacial climates that could be strongly affected by a warming world. Several studies have been performed to investigate the sensitivity of the ice sheet to changes in climate, but large uncertainties in its long-term response still exist. In this thesis, a new approach has been developed and applied to modeling the GIS response to climate change. The advantages compared to previous approaches are (i) that it can be applied over a wide range of climatic scenarios (both in the deep past and the future), (ii) that it includes the relevant feedback processes between the climate and the ice sheet and (iii) that it is highly computationally efficient, allowing simulations over very long timescales. The new regional energy-moisture balance model (REMBO) has been developed to model the climate and surface mass balance over Greenland and it represents an improvement compared to conventional approaches in modeling present-day conditions. Furthermore, the evolution of the GIS has been simulated over the last glacial cycle using an ensemble of model versions. The model performance has been validated against field observations of the present-day climate and surface mass balance, as well as paleo information from ice cores. The GIS contribution to sea level rise during the last interglacial is estimated to be between 0.5-4.1 m, consistent with previous estimates. The ensemble of model versions has been constrained to those that are consistent with the data, and a range of valid parameter values has been defined, allowing quantification of the uncertainty and sensitivity of the modeling approach. Using the constrained model ensemble, the sensitivity of the GIS to long-term climate change was investigated. 
It was found that the GIS exhibits hysteresis behavior (i.e., it is multi-stable under certain conditions), and that a temperature threshold exists above which the ice sheet transitions to an essentially ice-free state. The threshold in global temperature is estimated to be in the range of 1.3-2.3°C above preindustrial conditions, significantly lower than previously believed. The timescale of total melt scales non-linearly with the overshoot above the temperature threshold: a 2°C anomaly causes the ice sheet to melt in about 50,000 years, but an anomaly of 6°C will melt the ice sheet in less than 4,000 years. The meltback of the ice sheet was found to become irreversible after a certain fraction of the ice sheet has been lost; this point of irreversibility also depends on the temperature anomaly.
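The hysteresis and threshold behavior can be illustrated with a generic fold-bifurcation toy model; this is only a conceptual sketch, not the REMBO ice-sheet model:

```python
import numpy as np

# Toy bistability sketch: a state variable v (standing in for ice volume)
# evolving as dv/dt = -(v**3 - v + f), with f a temperature forcing.
# Below a critical forcing the cubic has three real equilibria (the outer
# two stable, hence hysteresis); beyond it only one state survives,
# mimicking the threshold above which the ice sheet is lost.
def real_equilibria(f):
    roots = np.roots([1.0, 0.0, -1.0, f])
    return sum(1 for r in roots if abs(r.imag) < 1e-9)

f_crit = 2.0 / (3.0 * np.sqrt(3.0))   # fold bifurcation of v^3 - v + f
print(real_equilibria(0.0))           # 3 equilibria: bistable regime
print(real_equilibria(2 * f_crit))    # 1 equilibrium: threshold crossed
```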
This dissertation presents a description of the phase dynamics of irregular oscillations and their interactions, considering chaotic and stochastic oscillations of autonomous dissipative systems. A phase description of stochastic oscillations requires, on the one hand, that different values of the phase be related to one another, so that the dynamics can be described independently of the chosen parametrization of the oscillation; on the other hand, for stochastic and chaotic oscillations, the system states that share the same phase must be identified. In this dissertation, the phase values are related to each other via an averaged phase-velocity function. For stochastic oscillations, however, several definitions of the mean velocity are possible. To better understand the differences between these velocity definitions, effective deterministic models of the oscillations are constructed on their basis. It turns out that the models reproduce different properties of the oscillations, such as the mean frequency or the invariant probability distribution. Depending on the application, the effective phase-velocity function of a particular model establishes an appropriate phase relation. As explained with simple examples, the theory of effective phase dynamics can thus also describe continuously and pulse-coupled stochastic oscillations. Furthermore, a criterion is described for the invariant identification of states of equal phase of irregular oscillations, forming so-called generalized isophases: the states of such an isophase should become indistinguishable in their dynamical evolution. For stochastic oscillations, this criterion is interpreted in an average sense.
As demonstrated with examples, different types of stochastic oscillations can thus be reduced in a unified way to a stochastic phase dynamics. Using a numerical algorithm for estimating isophases from data, the applicability of the theory is shown for a signal of regular human respiration. Furthermore, it turns out that the phase-identification criterion can only be fulfilled approximately for chaotic oscillations. Using the Rössler oscillator, the deep connection between approximate isophases, chaotic phase diffusion and unstable periodic orbits is laid out. Together, the theories of effective phase dynamics and generalized isophases enable a comprehensive and unified phase description of irregular oscillations.
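An averaged phase-velocity function of the kind used in this construction can be estimated directly from data by bin-averaging the observed phase increments over the wrapped phase. A minimal sketch for a simulated noisy protophase with a known velocity law; all parameters are illustrative:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

# Noisy protophase obeying d(theta)/dt = omega + a*sin(theta) + noise,
# integrated with Euler-Maruyama.  The goal is to recover the velocity
# law from the trajectory alone.
omega, a, dt, n = 1.0, 0.4, 1e-3, 500_000
noise = rng.normal(0.0, 0.05 * math.sqrt(dt), n - 1)
theta = np.empty(n)
theta[0] = 0.0
for i in range(n - 1):
    theta[i + 1] = theta[i] + (omega + a * math.sin(theta[i])) * dt + noise[i]

# Effective velocity: average the increments in 32 bins of the wrapped
# protophase; integrating 1/v would then yield a uniformly rotating phase.
wrapped = np.mod(theta[:-1], 2.0 * np.pi)
incr = np.diff(theta) / dt
bins = np.linspace(0.0, 2.0 * np.pi, 33)
idx = np.digitize(wrapped, bins) - 1
v_est = np.array([incr[idx == b].mean() for b in range(32)])

centers = 0.5 * (bins[:-1] + bins[1:])
err = np.max(np.abs(v_est - (omega + a * np.sin(centers))))
print(f"max deviation of estimated velocity: {err:.3f}")
```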
Based on sun-photometer measurements at three stations (AWIPEV/Koldewey in Ny-Ålesund (78.923°N, 11.923°E) 1995–2008; the 35th North Pole drifting station NP-35 (84.3–85.5°N, 41.7–56.6°E) March/April 2008; Sodankylä (67.37°N, 26.65°E) 2004–2007), the aerosol variability in the European Arctic and its causes are investigated. The focus is on the relationship between the aerosol parameters measured at the stations (aerosol optical depth, Ångström coefficient, etc.) and the transport of the aerosol, both on short time scales (days) and on long time scales (months, years). To establish this relationship on short time scales, 5-day backward trajectories were computed with the trajectory model PEP-Tracer for three starting heights (850 hPa, 700 hPa, 500 hPa) at 00, 06, 12 and 18 UTC. Using the non-hierarchical clustering method k-means, the backward trajectories were then grouped and assigned to source regions and to the measured aerosol optical depths. This assignment yields no unambiguous relationship between the transport of polluted air masses from Europe or Russia/Asia and enhanced aerosol optical depth. Nevertheless, for one specific case (March 2008) a direct connection between aerosol transport and high aerosol optical depths can be demonstrated: forest-fire aerosol from southwestern Russia reached the Arctic and was observed both at NP-35 and in Ny-Ålesund. In a further step, EOF analysis is used to examine to what extent large-scale atmospheric circulation patterns are responsible for the aerosol variability in the European Arctic. As with the trajectory analysis, the connection between the atmospheric circulation and the photometer measurements at the stations is generally weak.
An exception emerges when the annual cycles of surface pressure and aerosol optical depth are considered. High aerosol optical depths occur in spring, on the one hand, when the Icelandic Low and the Siberian High steer air masses from Europe or Russia/Asia into the Arctic and, on the other hand, when a strong high-pressure system sits over Greenland and large parts of the Arctic. It is also shown that the transition between spring and summer is at least partly caused by the change from the stable polar high in winter and spring to an Arctic atmosphere more strongly dominated by low-pressure systems in summer. The lower aerosol concentration in summer can partly be explained by an increase in wet deposition as an aerosol sink. For Ny-Ålesund, in addition to the transport patterns, the chemical composition of the aerosol was derived from impactor measurements at the Zeppelin station on Zeppelin mountain (474 m a.s.l.) near Ny-Ålesund. The positive correlation of the aerosol optical depth with the concentrations of sulfate ions and black carbon is very clear. Both substances enter the atmosphere largely through anthropogenic emissions. This demonstrably anthropogenic composition of the Arctic aerosol stands in contrast to the absence of a clear link to transport from industrial regions. It can only be explained by one or more transformation processes (e.g. nucleation of sulfuric-acid particles) taking place during transport from the source regions (Europe, Russia).
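The trajectory-clustering step can be sketched as follows: each backward trajectory is flattened into a vector of positions and grouped with k-means. The version below is hand-rolled, and the trajectory shapes and "source regions" are synthetic, not PEP-Tracer output:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic back trajectories: 20 positions (lat/lon-like pairs) built as
# random walks with a mean drift, one drift per hypothetical source region.
def make_trajectories(n, drift):
    steps = rng.normal(drift, 0.5, size=(n, 20, 2)).cumsum(axis=1)
    return steps.reshape(n, -1)               # flatten to 40-dim vectors

south = make_trajectories(40, (0.8, -0.2))    # e.g. flow from Europe
east = make_trajectories(40, (-0.2, 0.9))     # e.g. flow from Siberia
X = np.vstack([south, east])

def kmeans(X, k, iters=20):
    # deterministic seeding with evenly spaced sample points
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(X, 2)
print(np.bincount(labels))  # each synthetic source region forms one cluster
```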
The presented work describes new concepts for fast switching elements based on the principles of photonics. Waveguides operating in the visible and infrared ranges form the basis of these elements, and transparent polymers doped with dye molecules possessing second-order nonlinear optical properties are proposed as the materials for manufacturing the waveguides. The work shows how nonlinear optical processes in such structures can be driven by electro-optical and opto-optical control signals. The thesis covers the complete fabrication cycle of several types of integrated photonic elements. A theoretical analysis of high-intensity beam propagation in media with second-order optical nonlinearity is performed, and quantitative estimates of the conditions necessary for second-order nonlinear optical phenomena to occur are made, taking into account the properties of the materials used. The thesis describes the various stages of manufacturing the basic structure of integrated photonics: the planar waveguide. Using the finite element method, the structure of the electromagnetic field inside the waveguide was analysed for different modes. A separate part of the work deals with the creation of composite organic materials with high optical nonlinearity. Using methods of quantum chemistry, the dependence of the nonlinear properties of dye molecules on their structure was investigated in detail. In addition, the thesis discusses various methods of inducing optical nonlinearity in dye-doped polymer films. For the first time, the work proposes spatially modulating the nonlinear properties of the waveguide according to the Fibonacci law, which allows several different nonlinear optical processes to be involved simultaneously. The final part of the work describes various designs of integrated optical modulators and switches constructed from organic nonlinear optical waveguides.
A practical design of an optical modulator based on a Mach-Zehnder interferometer, fabricated by photolithography on a polymer film, is presented.
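The Fibonacci modulation mentioned above can be generated with the standard substitution rule A → AB, B → A, where 'A' and 'B' label waveguide segments carrying the two values of the second-order nonlinearity; segment lengths and coefficients are device design parameters not fixed here:

```python
# Fibonacci word via the substitution A -> AB, B -> A.  The resulting
# quasi-periodic segment sequence supplies the dense set of spatial
# frequencies that lets several nonlinear processes be involved at once.
def fibonacci_word(generations):
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

seq = fibonacci_word(7)
print(len(seq), seq[:13])  # 34 ABAABABAABAAB
```

The word lengths follow the Fibonacci numbers (here 34 = F9, with 21 A segments and 13 B segments), and the A:B ratio approaches the golden mean.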
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponentially – exist and that they play a major role in the dynamics of the systems under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit-cycle oscillators, which have various applications in the natural sciences, such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices; here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems related to the discrete nonlinear Schrödinger equation, describing, for example, coupled optical waveguides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method for solving the traveling-wave equation, which yields a quasi-exact solution (up to numerical errors): the compacton. Another ansatz employed throughout this work is the quasi-continuous approximation, where the lattice is described by a continuous medium. Here, compactons are found analytically and are defined on a truly compact support. Remarkably, both approaches give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability under collisions. We show that the collisions are not exactly elastic, but that a small part of the energy remains at the location of the collision. In finite lattices, this remaining part will then trigger a multiple-scattering process resulting in a chaotic state.
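The setting can be illustrated by direct simulation of a minimal strongly nonlinear chain with purely quartic inter-site coupling, i.e. without any linear spectrum; this is an illustrative sketch, not the traveling-wave solver used in the thesis:

```python
import numpy as np

# Hamiltonian ring of n sites with interaction potential V(r) = r^4/4
# between neighbours (r = relative displacement), so small-amplitude
# linear waves do not exist.  A localized velocity kick launches a
# compact-like pulse; velocity-Verlet keeps the energy nearly constant.
n, dt, steps = 200, 1e-3, 20_000
x = np.zeros(n)
v = np.zeros(n)
v[n // 2] = 1.0                       # localized excitation

def force(x):
    r = np.diff(x, append=x[:1])      # periodic relative displacements
    f = r ** 3                        # V'(r) for each bond
    return f - np.roll(f, 1)          # net force on each site

def energy(x, v):
    r = np.diff(x, append=x[:1])
    return 0.5 * (v ** 2).sum() + 0.25 * (r ** 4).sum()

e0 = energy(x, v)
a = force(x)
for _ in range(steps):                # velocity-Verlet integration
    v += 0.5 * dt * a
    x += dt * v
    a = force(x)
    v += 0.5 * dt * a

rel_drift = abs(energy(x, v) - e0) / e0
print(f"relative energy drift: {rel_drift:.2e}")
```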
This thesis is focused on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within dynamical mean-field theory, together with one-step model calculations of the photoemission process. From this comparison it is demonstrated that the present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, give too small mass renormalizations and scattering rates, thus demanding more refined many-body theories including nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated by a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible and that, after intercalation of Au, graphene on Ni(111) is spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented.
It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex-core polarization, a process which is beyond the limit of detection in the present experiments.
Preparation and investigation of polymer-foam films and polymer-layer systems for ferroelectrets
(2010)
Piezoelectric materials are very useful for applications in sensors and actuators. In addition to traditional ferroelectric ceramics and ferroelectric polymers, ferroelectrets have recently become a new group of piezoelectrics. Ferroelectrets are functional polymer systems for electromechanical transduction, with elastically heterogeneous cellular structures and internal quasi-permanent dipole moments. The piezoelectricity of ferroelectrets stems from linear changes of the dipole moments in response to external mechanical or electrical stress. Over the past two decades, polypropylene (PP) foams have been investigated with the aim of ferroelectret applications, and some products are already on the market. PP-foam ferroelectrets may exhibit piezoelectric d33 coefficients of 600 pC/N and more. Their operating temperature can, however, not be much higher than 60 °C. Recently developed polyethylene-terephthalate (PET) and cyclo-olefin copolymer (COC) foam ferroelectrets show slightly better thermal stability of d33, but usually at the price of smaller d33 values. Therefore, the main aim of this work is the development of new thermally stable ferroelectrets with appreciable piezoelectricity. Physical foaming is a promising technique for generating polymer foams from solid films without any pollution or impurity. Supercritical carbon dioxide (CO2) or nitrogen (N2) is usually employed as the foaming agent due to its good solubility in several polymers. Poly(ethylene naphthalate) (PEN) is a polyester with slightly better properties than PET. A “voiding + inflation + stretching” process has been specifically developed to prepare PEN foams. Solid PEN films are saturated with supercritical CO2 at high pressure and then thermally voided at high temperatures. Controlled inflation (Gas-Diffusion Expansion or GDE) is applied in order to adjust the void dimensions.
Additional biaxial stretching decreases the void heights, since it is known that lens-shaped voids lead to lower elastic moduli and therefore also to stronger piezoelectricity. Both contact and corona charging are suitable for the electric charging of PEN foams. The light emission from the dielectric-barrier discharges (DBDs) can be clearly observed. Corona charging in a gas of high dielectric strength such as sulfur hexafluoride (SF6) results in higher gas-breakdown strength in the voids and therefore increases the piezoelectricity. PEN foams can exhibit piezoelectric d33 coefficients as high as 500 pC/N. Dielectric-resonance spectra show elastic moduli c33 of 1 − 12 MPa, anti-resonance frequencies of 0.2 − 0.8 MHz, and electromechanical coupling factors of 0.016 − 0.069. As expected, it is found that PEN foams show better thermal stability than PP and PET foams. Samples charged at room temperature can be utilized up to 80 − 100 °C. Annealing after charging, or charging at elevated temperatures, may improve the thermal stability. Samples charged at suitable elevated temperatures show working temperatures as high as 110 − 120 °C. Acoustic measurements at frequencies of 2 Hz − 20 kHz show that PEN foams can be well applied in this frequency range. Fluorinated ethylene-propylene (FEP) copolymers are fluoropolymers with very good physical, chemical and electrical properties. The charge-storage ability of solid FEP films can be significantly improved by adding boron nitride (BN) filler particles. FEP foams are prepared by means of a one-step procedure consisting of CO2 saturation and subsequent in-situ high-temperature voiding. Piezoelectric d33 coefficients of up to 40 pC/N are measured on such FEP foams. Mechanical fatigue tests show that the as-prepared PEN and FEP foams are mechanically stable over long periods of time. Although polymer-foam ferroelectrets have a high application potential, their piezoelectric properties strongly depend on the cellular morphology, i.e.
on the size, shape, and distribution of the voids. On the other hand, the controlled preparation of optimized cellular structures is still a technical challenge. Consequently, new ferroelectrets based on polymer-layer systems (sandwiches) have been prepared from FEP. By sandwiching an FEP mesh between two solid FEP films and fusing the polymer system with a laser beam, a well-designed uniform macroscopic cellular structure can be formed. Dielectric resonance spectroscopy reveals piezoelectric d33 coefficients as high as 350 pC/N, elastic moduli of about 0.3 MPa, anti-resonance frequencies of about 30 kHz, and electromechanical coupling factors of about 0.05. Samples charged at elevated temperatures show better thermal stability than those charged at room temperature, and the higher the charging temperature, the better the stability. After proper charging at 140 °C, the working temperatures can be as high as 110 − 120 °C. Acoustic measurements at frequencies of 200 Hz − 20 kHz indicate that the FEP layer systems are suitable for applications at least in this range.
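As a rough consistency check on the quoted figures, the thickness-mode coupling factor of a ferroelectret can be estimated as k ≈ d33·sqrt(c33/(ε0·εr)); the effective relative permittivity of the air-filled layer stack is an assumption here, not a value from the abstract:

```python
import math

EPS0 = 8.854e-12     # vacuum permittivity [F/m]

# Figures quoted above for the FEP layer system:
d33 = 350e-12        # piezoelectric coefficient [C/N]
c33 = 0.3e6          # elastic modulus [Pa]
eps_r = 1.2          # ASSUMED effective permittivity of the layer stack

# Thickness-mode electromechanical coupling factor estimate.
k = d33 * math.sqrt(c33 / (EPS0 * eps_r))
print(f"k = {k:.3f}")  # same order as the reported ~0.05
```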
The availability of large data sets has allowed researchers to uncover rich properties of complex systems, such as complex networks and human dynamics. A vast number of systems, from the Internet to the brain, from power grids to ecosystems, can be represented as large complex networks. Dynamics on and of complex networks has attracted growing research interest. In this thesis, first, I introduced a simple but effective dynamical optimization coupling scheme which realizes complete synchronization in networks with undelayed and delayed couplings and enhances the synchronizability of small-world and scale-free networks. Second, I showed that the robustness of scale-free networks with community structure is enhanced due to the existence of communities in the networks, and some of the response patterns were found to coincide with topological communities. My results provide insights into the relationship between network topology and functional organization in complex networks from another viewpoint. Third, since humans are an important kind of node in many complex networks, detailed human correspondence dynamics was studied with both data and a model. A new and general type of human correspondence pattern was found, and an interacting priority-queues model was introduced to explain it. The model can also embrace a range of realistic social interacting systems such as email and letter communication. My findings provide insight into various human activities at both the individual and the network level. Fourth, I presented clear new evidence that human comment behavior in on-line social systems, a different type of interacting human dynamics, is non-Poissonian, and a model based on personal attraction was introduced to explain it. These results are helpful for discovering regular patterns of human behavior in on-line society and for understanding the evolution of public opinion in virtual as well as real societies.
Finally, conclusions are drawn and an outlook on human dynamics and complex networks is given.
The aim of this thesis is to overcome a discrepancy between the theory of phase dynamics and its application in time series analysis: while the theoretical phase is uniquely determined and invariant under coordinate transformations, i.e., with respect to the chosen observable, the standard methods for estimating the phase from given time series yield results that depend on the chosen observables and thus do not describe the respective system in a unique and invariant way. To make this discrepancy explicit, the terminological distinction between phase and protophase is introduced: the term phase is used only for variables that correspond to the theoretical concept of the phase and therefore characterize the respective system in an invariant way, whereas the observable-dependent estimates of the phase obtained from time series are called protophases. The central subject of this thesis is the development of a deterministic transformation that leads from any protophase of a self-sustained oscillator to the uniquely determined phase. This then enables an invariant description of coupled oscillators and their interaction. The application of the transformation and its effect are demonstrated both on numerical examples - in particular, the phase transformation is extended in one example to the case of three coupled oscillators - and on multivariate measurements of the ECG, the pulse, and respiration, from which phase models of the cardiorespiratory interaction are reconstructed. Finally, the phase transformation for autonomous oscillators is extended to the case of a non-negligible amplitude dependence of the protophase, which enables, for example, the numerical determination of the isochrones of the chaotic Rössler system.
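The protophase-to-phase transformation can be sketched numerically: the observable-dependent protophase is mapped to the invariant phase by flattening its distribution, using Fourier coefficients of the protophase density. A minimal sketch; the number of harmonics and the test oscillator below are illustrative assumptions, not the thesis setup.

```python
import numpy as np

def protophase_to_phase(theta, n_harm=10):
    """Map an unwrapped protophase theta to the invariant phase by
    flattening the protophase distribution, phi(theta) = 2*pi*CDF(theta),
    evaluated via Fourier coefficients of the protophase density."""
    theta = np.asarray(theta, dtype=float)
    phi = theta.copy()
    for n in range(1, n_harm + 1):
        cn = np.mean(np.exp(-1j * n * theta))  # n-th density coefficient
        phi += (2.0 / n) * np.imag(cn * (np.exp(1j * n * theta) - 1.0))
    return phi
```

Feeding it a distorted protophase such as theta = phi + 0.4*sin(phi), generated from a uniformly growing true phase, returns a phase that again grows almost uniformly.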
This thesis is concerned with the development of numerical methods using finite difference techniques for the discretization of initial value problems (IVPs) and initial boundary value problems (IBVPs) of certain hyperbolic systems which are first order in time and second order in space. This type of system appears in some formulations of the Einstein equations, such as ADM, BSSN, NOR, and the generalized harmonic formulation. For the IVP, the stability method proposed in [14] is extended from second- and fourth-order centered schemes to 2n-order accuracy, including also the case when some first order derivatives are approximated with off-centered finite difference operators (FDOs) and dissipation is added to the right-hand sides of the equations. For the model problem of the wave equation, special attention is paid to the analysis of Courant limits and numerical speeds. Although off-centered FDOs have larger truncation errors than centered FDOs, it is shown that in certain situations off-centering by just one point can be beneficial for the overall accuracy of the numerical scheme. The wave equation is also analyzed with respect to its initial boundary value problem. All three types of boundaries that can appear in this case - outflow, inflow, and completely inflow - are investigated. Using the ghost-point method, 2n-accurate (n = 1, 4) numerical prescriptions are given for each type of boundary. The inflow boundary is also approached using the SAT-SBP method. At the end of the thesis, a 1-D variant of the BSSN formulation is derived and some of its IBVPs are considered. The boundary procedures, based on the ghost-point method, are intended to preserve the interior 2n-accuracy. Numerical tests show that this is the case if sufficient dissipation is added to the right-hand sides of the equations.
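The building block of such schemes is the centered finite difference operator for the second spatial derivative. A minimal sketch of the second-order (n = 1) case, together with the convergence check one would run on the wave-equation model problem; the periodic grid and test function are illustrative assumptions.

```python
import numpy as np

def d2_centered(u, h):
    """Second-order centered finite difference for u_xx on a periodic grid."""
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / h**2

def max_error(n):
    """Max error of d2_centered applied to sin(x), whose exact u_xx is -sin(x)."""
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    return np.max(np.abs(d2_centered(np.sin(x), h) + np.sin(x)))

# Halving h should reduce the error by a factor of ~4 for a second-order scheme.
ratio = max_error(50) / max_error(100)
print(round(ratio, 1))
```

Higher-order (2n) operators use wider stencils and, in the off-centered variants discussed above, asymmetric weights near boundaries; the same grid-refinement test then yields ratios of 2^(2n).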
Coupling of the electrical, mechanical and optical response in polymer/liquid-crystal composites
(2010)
Micrometer-sized liquid-crystal (LC) droplets embedded in a polymer matrix may enable optical switching in the composite film through the alignment of the LC director along an external electric field. When a ferroelectric material is used as the host polymer, the electric field generated by the piezoelectric effect can orient the director of the LC under an applied mechanical stress, making these materials interesting candidates for piezo-optical devices. In this work, polymer-dispersed liquid crystals (PDLCs) are prepared from poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) and a nematic liquid crystal. The anchoring effect is studied by means of dielectric relaxation spectroscopy. Two dispersion regions are observed in the dielectric spectra of the pure P(VDF-TrFE) film. They are related to the glass transition and to a charge-carrier relaxation, respectively. In PDLC films containing 10 and 60 wt% LC, an additional, bias-field-dependent relaxation peak is found that can be attributed to the motion of LC molecules. Due to the anchoring effect of the LC molecules, this relaxation process is slowed down considerably when compared with the related process in the pure LC. The electro-optical and piezo-optical behavior of PDLC films containing 10 and 60 wt% LC is investigated. In addition to the refractive-index mismatch between the polymer matrix and the LC molecules, the interaction between the polymer dipoles and the LC molecules at the droplet interface influences the light-scattering behavior of the PDLC films. For the first time, it is shown that the electric field generated by the application of a mechanical stress may lead to changes in the transmittance of a PDLC film. Such a piezo-optical PDLC material may be useful, e.g., in sensing and visualization applications.
Compared to a non-polar matrix polymer, the polar matrix polymer exhibits a strong interaction with the LC molecules at the polymer/LC interface which affects the electro-optical effect of the PDLC films and prevents a larger increase in optical transmission.
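The dispersion regions seen in dielectric relaxation spectroscopy are commonly described, in the simplest case, by a Debye relaxation, whose loss peak sits at the inverse relaxation time. A minimal sketch; eps_inf, delta_eps and tau are illustrative assumptions, not fitted values from this work.

```python
import numpy as np

def debye(omega, eps_inf=2.0, delta_eps=8.0, tau=1e-4):
    """Complex permittivity of a single Debye relaxation:
    eps*(omega) = eps_inf + delta_eps / (1 + i*omega*tau)."""
    return eps_inf + delta_eps / (1.0 + 1j * omega * tau)

omega = np.logspace(1, 7, 601)        # angular frequency grid, rad/s
loss = -np.imag(debye(omega))         # dielectric loss eps''
omega_peak = omega[np.argmax(loss)]   # loss peak, expected near 1/tau
```

A relaxation that is "slowed down" by anchoring, as described above, corresponds to a larger tau, i.e. the loss peak shifts to lower frequencies.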
Due to the unique environmental conditions and different feedback mechanisms, the Arctic region is especially sensitive to climate changes. The influence of clouds on the radiation budget is substantial, but difficult to quantify and parameterize in models. In the framework of this PhD project, elastic backscatter and depolarization lidar observations of Arctic clouds were performed during the international Arctic Study of Tropospheric Aerosol, Clouds and Radiation (ASTAR) from Svalbard in March and April 2007. Clouds were probed above the inaccessible Arctic Ocean with a combination of airborne instruments: the Airborne Mobile Aerosol Lidar (AMALi) of the Alfred Wegener Institute for Polar and Marine Research provided information on the vertical and horizontal extent of clouds along the flight track, optical properties (backscatter coefficient), and cloud thermodynamic phase. From the data obtained by the spectral albedometer (University of Mainz), the cloud phase and cloud optical thickness were deduced. Furthermore, in situ observations with the Polar Nephelometer, Cloud Particle Imager and Forward Scattering Spectrometer Probe (Laboratoire de Météorologie Physique, France) provided information on the microphysical properties: cloud particle size and shape, concentration, extinction, and liquid and ice water content. In the thesis, a data set of four flights is analyzed and interpreted. The lidar observations served to detect atmospheric structures of interest, which were then probed by in situ techniques. With this method, an optically subvisible ice cloud was characterized by the ensemble of instruments (10 April 2007). Radiative transfer simulations based on the lidar, radiation and in situ measurements allowed the calculation of the cloud forcing, amounting to -0.4 W m-2. This slight surface cooling is negligible on a local scale.
However, thin Arctic clouds have been reported to occur more frequently in winter time, when the clouds' effect on longwave radiation (a surface warming of 2.8 W m-2) is not balanced by the reduced shortwave radiation (surface cooling). Boundary layer mixed-phase clouds were analyzed for two days (8 and 9 April 2007). The typical structure, consisting of a predominantly liquid water layer at cloud top and ice crystals below, was confirmed by all instruments. The lidar observations were compared to European Centre for Medium-Range Weather Forecasts (ECMWF) meteorological analyses. A change of air masses along the flight track was evidenced in the airborne data by a small, completely glaciated cloud part within the mixed-phase cloud system. This indicates that the updraft necessary for the formation of new cloud droplets at cloud top is disturbed by the mixing processes. The measurements served to quantify the shortcomings of the ECMWF model in describing mixed-phase clouds. As the partitioning of cloud condensate into liquid and ice water is done by a diagnostic equation based on temperature, the cloud structures consisting of a liquid cloud top layer and ice below could not be reproduced correctly. A small amount of liquid water was calculated for the lowest (and warmest) part of the cloud only. Further, the liquid water content was underestimated by an order of magnitude compared to in situ observations. The airborne lidar observations of 9 April 2007 were compared to space-borne lidar data from the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission. Both systems agreed on the increase of cloud top height along the same flight track. However, during the time delay of 1 h between the lidar measurements, advection and cloud processing took place, and a detailed comparison of small-scale cloud structures was not possible.
A double-layer cloud at an altitude of 4 km was observed with lidar at the west coast in the direct vicinity of Svalbard (14 April 2007). The cloud system consisted of two geometrically thin liquid cloud layers (each 150 m thick) with ice below each layer. While the upper one was possibly formed by orographic lifting under the influence of westerly winds, or by the vertical wind shear shown by ECMWF analyses, the lower one might be the result of evaporating precipitation out of the upper layer. The existence of ice precipitation between the two layers supports the hypothesis that humidity released from evaporating precipitation was cooled and consequently condensed as it experienced the radiative cooling from the upper layer. In summary, a unique data set characterizing tropospheric Arctic clouds was collected with lidar, in situ and radiation instruments. The joint evaluation with meteorological analyses allowed a detailed insight into cloud properties, cloud evolution processes and radiative effects.
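The cloud thermodynamic phase retrieval from depolarization lidar rests on a simple principle: spherical liquid droplets barely depolarize the backscattered light, whereas non-spherical ice crystals depolarize it strongly. A minimal sketch; the 0.10 threshold is a common rule-of-thumb assumption, not the calibrated value used with AMALi.

```python
def cloud_phase(depol_ratio):
    """Classify the dominant thermodynamic phase of a lidar range bin
    from its volume depolarization ratio (threshold is an assumption)."""
    return "ice" if depol_ratio > 0.10 else "liquid"

print(cloud_phase(0.02), cloud_phase(0.35))
```

Applied bin by bin along a profile, this kind of rule separates the liquid cloud-top layer from the ice crystals below it in the mixed-phase cases described above.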
CHAMP (CHAllenging Minisatellite Payload) is a German small-satellite mission to study the Earth's gravity field, magnetic field and upper atmosphere. Thanks to the good condition of the satellite so far, the planned 5-year mission has been extended to the year 2009. The satellite continuously provides a large quantity of measurement data for the study of the Earth. The measurements of the magnetic field are undertaken by two Fluxgate Magnetometers (FGM, vector magnetometers) and one Overhauser Magnetometer (OVM, scalar magnetometer) flown on CHAMP. In order to ensure the quality of the data during the whole mission, the calibration of the magnetometers has to be performed routinely in orbit. The scalar magnetometer serves as the magnetic reference and its readings are compared with the readings of the vector magnetometer. The readings of the vector magnetometer are corrected with the parameters derived from this comparison, which is called the scalar calibration. In the routine processing, these calibration parameters are updated every 15 days by means of scalar calibration. There are also magnetic effects originating from the satellite itself which disturb the measurements. Most of them have been characterized during tests before launch. Among them are the remanent magnetization of the spacecraft and fields generated by currents. They are all considered to be constant over the mission life. The 8 years of operation experience allow us to investigate the long-term behavior of the magnetometers and the satellite systems. In this investigation it was found, for example, that the scale factors of the FGM show obvious long-term changes which can be described by logarithmic functions. The other parameters (offsets and angles between the three components) can be considered constant. If these continuous parameters are applied in the FGM data processing, the disagreement between the OVM and the FGM readings is limited to ±1 nT over the whole mission.
This demonstrates that the magnetometers on CHAMP exhibit a very good stability. However, the daily correction of the Z-component offset of the FGM improves the agreement between the magnetometers markedly. The Z-component offset plays a very important role for the data quality. It exhibits a linear relationship with the standard deviation of the disagreement between the OVM and the FGM readings. After the Z-offset correction, the errors are limited to ±0.5 nT (equivalent to a standard deviation of 0.2 nT). We improved the corrections of the spacecraft field which are not taken into account in the routine processing. Such disturbance fields, e.g. from the power supply system of the satellite, cause systematic errors in the FGM data and are misinterpreted in the 9-parameter calibration, which introduces false local-time-related variations of the calibration parameters. These corrections are made by applying a mathematical model to the measured currents. This non-linear model is derived with an inversion technique. If the disturbance fields of the satellite body are fully corrected, the standard deviation of the scalar error ΔB remains at about 0.1 nT. Additionally, in order to keep the OVM readings a reliable standard, the imperfect coefficients of the torquer current correction for the OVM are redetermined by solving a minimization problem. The temporal variation of the spacecraft remanent field is investigated. It was found that the average magnetic moment of the magneto-torquers reflects well the moment of the satellite. This allows for a continuous correction of the spacecraft field. The reasons for possible unknown systematic errors are discussed in this thesis. In particular, both temperature uncertainties and timing errors have an influence on the FGM data. Based on the results of this thesis, the data processing of future magnetic missions can be designed in an improved way.
In particular, the upcoming ESA mission Swarm can take advantage of our findings and provide all the auxiliary measurements needed for a proper recovery of the ambient magnetic field.
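The core of a scalar calibration can be sketched on synthetic data: per-axis scale factors s_i and offsets o_i of the vector magnetometer are recovered by comparing |B| with the scalar reference F. Writing F² = Σ s_i²(m_i − o_i)² makes the problem linear in (s_i², s_i²o_i, Σ s_i²o_i²). The inter-axis angles, also estimated in the routine 9-parameter calibration, are omitted here for brevity; all numbers are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
s_true = np.array([1.0012, 0.9987, 1.0005])        # scale factors
o_true = np.array([5.0, -3.0, 2.0])                # offsets, nT
u = rng.normal(size=(500, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)      # random field directions
B = rng.uniform(20000.0, 50000.0, size=(500, 1))   # field magnitudes, nT
b = B * u                                          # true field vectors
m = b / s_true + o_true                            # raw vector readings (FGM-like)
F = np.linalg.norm(b, axis=1)                      # scalar reference (OVM-like)

# Linear system in (s_i^2, s_i^2*o_i, sum_i s_i^2*o_i^2):
A = np.hstack([m**2, -2.0 * m, np.ones((500, 1))])
col = np.linalg.norm(A, axis=0)                    # column equilibration
coef = np.linalg.lstsq(A / col, F**2, rcond=None)[0] / col
s_est = np.sqrt(coef[:3])
o_est = coef[3:6] / coef[:3]
```

On noise-free synthetic data the scale factors and offsets are recovered essentially exactly; with real data the residual |s(m − o)| − F plays the role of the scalar error ΔB discussed above.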
Central stars of planetary nebulae are low-mass stars on the brink of their final evolution towards white dwarfs. Because of their surface temperatures above 25,000 K, their UV radiation ionizes the surrounding material, which was ejected in an earlier phase of their evolution. Such fluorescent circumstellar gas is called a "Planetary Nebula". About one-tenth of the Galactic central stars are hydrogen-deficient. Generally, the surface of these central stars is a mixture of helium, carbon, and oxygen resulting from partial helium burning. Moreover, most of them have a strong stellar wind, similar to massive Pop-I Wolf-Rayet stars, and are in analogy classified as [WC]. The brackets distinguish this special type from the massive WC stars. Qualitative spectral analyses of [WC] stars led to the assumption of an evolutionary sequence from the cooler, so-called late-type [WCL] stars to the very hot, early-type [WCE] stars. Quantitative analyses of the winds of [WC] stars became possible by means of computer programs that solve the radiative transfer in the co-moving frame, together with the statistical equilibrium equations for the population numbers. First analyses employing models without iron-line blanketing resulted in systematically different abundances for [WCL] and [WCE] stars. While the mass ratio of He:C is roughly 40:50 for [WCL] stars, it is 60:30 on average for [WCE] stars. The postulated evolution from [WCL] to [WCE], however, could only lead to an increase of carbon, since heavier elements are built up by nuclear fusion. In the present work, improved models are used to re-analyze the [WCE] stars and to confirm their He:C abundance ratio. Refined models, calculated with the Potsdam WR model atmosphere code (PoWR), now account for line blanketing due to iron-group elements, small-scale wind inhomogeneities, and complex model atoms for He, C, O, H, P, N, and Ne.
With regard to stellar evolutionary models for the hydrogen-deficient [WC] stars, the Ne and N abundances are of particular interest. Only one out of three different evolutionary channels, the VLTP scenario, leads to a Ne and N overabundance of a few percent by mass. A VLTP, a very late thermal pulse, is a rapid increase of the energy production of the helium-burning shell, while hydrogen burning has already ceased. Subsequently, the hydrogen envelope is mixed with deeper layers and completely burnt in the presence of C, He, and O. This results in the formation of N and Ne. A sample of eleven [WCE] stars has been analyzed. For three of them, PB 6, NGC 5189, and [S71d]3, an N overabundance of 1.5% has been found, while for three other [WCE] stars such high abundances of N can be excluded. In the case of NGC 5189, strong spectral lines of Ne can be reproduced qualitatively by our models. At present, the Ne mass fraction can only be roughly estimated from the Ne emission lines and seems to be of the order of a few percent by mass. Furthermore, using a diagnostic He-C line pair, the He:C abundance ratio of 60:30 for [WCE] stars is confirmed. Within the framework of the analysis, a new class of hydrogen-deficient central stars has been discovered, with PB 8 as its first member. Its atmospheric mixture resembles that of the massive WNL stars rather than that of the [WC] stars. The determined mass fractions H:He:C:N:O are 40:55:1.3:2:1.3. As the wind of PB 8 contains significant amounts of O and C, in contrast to WN stars, a classification as [WN/WC] is suggested.
A huge number of applications require coherent radiation in the visible spectral range. Since diode lasers are very compact and efficient light sources, there is great interest in covering these applications with diode laser emission. Despite modern band-gap engineering, not all wavelengths can be accessed with diode laser radiation. Especially in the visible spectral range between 480 nm and 630 nm, no emission from diode lasers is available yet. Nonlinear frequency conversion of near-infrared radiation is a common way to generate coherent emission in the visible spectral range. However, radiation with extraordinary spatial, temporal and spectral quality is required to pump frequency conversion. Broad-area (BA) diode lasers are reliable high-power light sources in the near-infrared spectral range. They belong to the most efficient coherent light sources, with electro-optical efficiencies of more than 70%. Standard BA lasers are not suitable as pump lasers for frequency conversion because of their poor beam quality and spectral properties. For this purpose, tapered lasers and diode lasers with Bragg gratings are utilized. However, these new diode laser structures demand additional manufacturing and assembly steps, which makes their processing challenging and expensive. An alternative to BA diode lasers is the stripe-array architecture. The emitting area of a stripe-array diode laser is comparable to a BA device, and the manufacturing of these arrays requires only one additional process step. Such a stripe-array consists of several narrow striped emitters realized in close proximity. Due to the overlap of the fields of neighboring emitters or the presence of leaky waves, a strong coupling between the emitters exists. As a consequence, the emission of such an array is characterized by a so-called supermode. However, for the free-running stripe-array, mode competition between several supermodes occurs because of the lack of wavelength stabilization.
This leads to power fluctuations, spectral instabilities and poor beam quality. Thus, it was necessary to study the emission properties of those stripe-arrays to find new concepts for an external synchronization of the emitters. The aim was to achieve stable longitudinal and transversal single-mode operation with high output powers, giving a brightness sufficient for efficient nonlinear frequency conversion. For this purpose, a comprehensive analysis of the stripe-array devices was carried out. The physical effects that are the origin of the emission characteristics were investigated theoretically and experimentally. In this context, numerical models could be verified and extended. A good agreement between simulation and experiment was observed. One way to stabilize a specific supermode of an array is to operate it in an external cavity. Based on mathematical simulations and experimental work, it was possible to design novel external cavities to select a specific supermode and stabilize all emitters of the array at the same wavelength. This resulted in stable emission with 1 W output power, a narrow bandwidth in the range of 2 MHz, and a very good beam quality with M²<1.5. This is a new level of brightness and brilliance compared to other BA and stripe-array diode laser systems. The emission from this external-cavity diode laser (ECDL) satisfied the requirements for nonlinear frequency conversion. Furthermore, a substantial improvement over existing concepts was made. In the next step, newly available periodically poled crystals were used for second-harmonic generation (SHG) in single-pass setups. With the stripe-array ECDL as pump source, more than 140 mW of coherent radiation at 488 nm could be generated with a very high opto-optical conversion efficiency. The generated blue light had very good transversal and longitudinal properties and could be used to generate biphotons by parametric down-conversion.
This was feasible because of the improvements made to the infrared stripe-array diode lasers through the development of new physical concepts.
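The quoted figures can be put in relation via the quadratic pump-power scaling of single-pass SHG in the undepleted-pump limit, P_shg ≈ η_nl · P_pump². A back-of-the-envelope sketch using the reported ~1 W pump and >140 mW blue output (the exact pump power coupled into the crystal is an assumption):

```python
p_pump = 1.0    # W, infrared pump power (assumed fully available for SHG)
p_shg = 0.14    # W, generated power at 488 nm

eta_nl = p_shg / p_pump**2   # normalized single-pass efficiency, 1/W
eta_oo = p_shg / p_pump      # opto-optical conversion efficiency
print(f"eta_nl = {eta_nl:.2f} /W, opto-optical efficiency = {eta_oo:.0%}")
```

An opto-optical efficiency of roughly 14% in a single pass illustrates why the narrow bandwidth and good beam quality of the stripe-array ECDL matter: both enter directly into the achievable normalized efficiency.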
After the epoch of reionisation, the intergalactic medium (IGM) is kept at a high photoionisation level by the cosmic UV background radiation field. This background, primarily composed of the integrated contribution of quasars and young star-forming galaxies, has an intensity that is subject to spatial and temporal fluctuations. In particular, in the vicinity of luminous quasars the UV radiation intensity grows by several orders of magnitude. Due to the enhanced UV radiation up to a few Mpc from the quasar, the ionised hydrogen fraction significantly increases and becomes visible as a reduced level of absorption in the HI Lyman alpha (Ly-alpha) forest. This phenomenon is known as the proximity effect and it is the main focus of this thesis. Modelling the influence of the quasar radiation on the IGM, one is able to determine the UV background intensity at a specific frequency (J_nu_0), or equivalently, its photoionisation rate (Gamma_b). This is of crucial importance for both theoretical and observational cosmology. Thus far, the proximity effect has been investigated primarily by combining the signal of large samples of quasars, as it has been regarded as a statistical phenomenon. Only a handful of studies tried to measure its signature on individual lines of sight, albeit focusing on one sight line only. Our aim is to perform a systematic investigation of large samples of quasars searching for the signature of the proximity effect, with a particular emphasis on its detection on individual lines of sight. We begin this survey with a sample of 40 high-resolution (R~45000), high signal-to-noise ratio (S/N~70) quasar spectra at redshifts 2.1<z<4.7, publicly available in the European Southern Observatory (ESO) archive. The extraordinary quality of this data set enables us to detect the proximity effect signature not only in the combined quasar sample, but also along each individual sight line.
This allows us to determine not only the UV background intensity at the mean redshift of this sample, but also to estimate its intensity in small (Delta z~0.2) redshift intervals in the range 2<z<4. Our estimates (J_nu_0~ 3x10^{-22} erg s^{-1} cm^{-2} Hz^{-1} sr^{-1}) are for the first time in very good agreement with different constraints on its evolution obtained from theoretical predictions and numerical simulations. We continue this systematic analysis of the proximity effect with the largest search to date, employing the Sloan Digital Sky Survey (SDSS) data set. The sample consists of 1733 quasars at redshifts z>2.3. In spite of the low resolution and limited S/N, we detect the proximity effect in about 98% of the quasars at a high significance level. Thereby we are able to determine the evolution of the UV background photoionisation rate within the redshift range 2<z<5, finding Gamma_b~ 1.6x10^{-12} s^{-1}. With these new measurements we explore literature estimates of the quasar luminosity function and predict the stellar luminosity density up to a redshift of about z~5. Our results are globally in good agreement with recent determinations inferred from deep surveys of high-redshift galaxies. We then compare our measurements of the UV background photoionisation rate inferred from the two samples at high and low resolution. While these data sets differ greatly in quality, our determinations are in considerable agreement at z<3.3, even though they agree less well at higher redshifts. We suspect that this may be caused either by the small number of high-resolution quasar spectra at the highest redshifts considered or by some systematic effect due to the limited data quality of SDSS. Complementary to the observational investigation of the proximity effect in high-redshift quasars, we explore some theoretical aspects linked to and based on these results.
We employ complex numerical simulations of structure formation to achieve a better representation of the Ly-alpha forest. Modelling the signature of the proximity effect on randomly selected sight lines, we demonstrate the advantages of dealing with individual lines of sight instead of combining their signal to investigate this phenomenon. Furthermore, we develop and test novel techniques aimed at a more precise determination of the proximity effect signal. With this investigation we demonstrate that the technique developed and employed in this thesis is the most accurate adopted thus far. Tighter determinations of the UV background rely not only on suitable methods to detect its signature, but also on a deeper understanding of the environments in which quasars form and evolve. We initiate an investigation of complex numerical simulations including radiative energy transport in order to model the proximity effect in more detail. Such a simulation may lead to a characterisation of the quasar environment based on the comparison between the observed and simulated statistical properties of the proximity effect signature.
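The equivalence between the intensity J_nu_0 and the photoionisation rate Gamma_b mentioned above follows from integrating the background spectrum over the HI cross section. Assuming power laws J_nu ∝ nu^(-alpha) and sigma_nu ∝ nu^(-3), the integral reduces to Gamma = 4π J_nu0 σ0 / (h (α + 3)); the spectral index α = 1.5 below is an illustrative assumption, and cgs units are used throughout.

```python
import math

h = 6.626e-27      # erg s, Planck constant
sigma_0 = 6.3e-18  # cm^2, HI photoionisation cross section at the Lyman limit
J_nu0 = 3e-22      # erg s^-1 cm^-2 Hz^-1 sr^-1, measured intensity (ESO sample)
alpha = 1.5        # assumed spectral index of the UV background

gamma = 4.0 * math.pi * J_nu0 * sigma_0 / (h * (alpha + 3.0))
print(f"Gamma_b ~ {gamma:.1e} s^-1")
```

The result lands within a factor of ~2 of the SDSS-based estimate Gamma_b ~ 1.6x10^{-12} s^{-1}, illustrating the consistency of the two measurements under a reasonable spectral slope.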
This thesis describes the observations of the Galactic center Quintuplet cluster, the spectral analysis of the cluster's Wolf-Rayet stars of the nitrogen sequence to determine their fundamental stellar parameters, and discusses the obtained results in a general context. The Quintuplet cluster was discovered in one of the first infrared surveys of the Galactic center region (Okuda et al. 1987, 1989) and was observed for this project with the ESO-VLT near-infrared integral field instrument SINFONI-SPIFFI. The subsequent data reduction was performed in part with a self-written pipeline to obtain flux-calibrated spectra of all objects detected in the imaged field of view. First results of the observation were compiled and published in a spectral catalog of 160 flux-calibrated $K$-band spectra in the range of 1.95 to 2.45\,$\mu$m, containing 85 early-type (OB) stars, 62 late-type (KM) stars, and 13 Wolf-Rayet stars. About 100 of these stars are cataloged for the first time. The main part of the thesis project concentrated on the analysis of the WR stars of the nitrogen sequence and one further identified emission-line star (Of/WN) with tailored Potsdam Wolf-Rayet (PoWR) models for expanding atmospheres (Hamann et al. 1995), which are applied to derive the stellar parameters of these stars. For this purpose, the atomic input data of the PoWR models had to be extended by further line transitions in the near-infrared spectral range to enable adequate model spectra to be calculated. These models were then fitted to the observed spectra, revealing typical parameters for this class of stars. A significant amount of hydrogen of up to $X_\text{H} \sim 0.2$ by mass fraction is still present in their stellar atmospheres. The stars are also found to be very luminous ($\log{(L/L_\odot)} > 6.0$) and show mass-loss rates and wind characteristics typical for radiation-driven winds. By comparison with stellar evolutionary models (Meynet \& Maeder 2003a; Langer et al.
1994), the initial masses were estimated and indicate that the Quintuplet WN stars are descendants of the most massive O stars with $M_\text{init} > 60 M_\odot$ and that their ages correspond to a cluster age of 3-5\,million years. The analysis of the individual WN stars revealed an average extinction of $A_K =3.1 \pm 0.5$\,mag ($A_V = 27 \pm 4$) towards the Quintuplet cluster. This extinction was applied to derive the stellar luminosities of the remaining early-type and late-type stars in the catalog, and a Hertzsprung-Russell diagram could be compiled. Surprisingly, two stellar populations are found: a group of main-sequence OB stars and a group of evolved late-type stars, i.e. red supergiants (RSG). The main-sequence stars indicate a cluster age of 4 million years, which would be too young for red supergiants to be already present. A star formation event lasting for a few million years might possibly explain the Quintuplet's population, and the cluster would still be considered coeval. However, the unexpected simultaneous presence of red supergiants and Wolf-Rayet stars in the cluster points out that the details of star formation and cluster evolution are not yet well understood for the Quintuplet cluster.
We study buckling instabilities of filaments in biological systems. Filaments in a cell are the building blocks of the cytoskeleton. They are responsible for the mechanical stability of cells and play an important role in intracellular transport by molecular motors, which transport cargo such as organelles along cytoskeletal filaments. Filaments of the cytoskeleton are semiflexible polymers, i.e., their bending energy is comparable to the thermal energy such that they can be viewed as elastic rods on the nanometer scale, which exhibit pronounced thermal fluctuations. Like macroscopic elastic rods, filaments can undergo a mechanical buckling instability under a compressive load. In the first part of the thesis, we study how this buckling instability is affected by the pronounced thermal fluctuations of the filaments. In cells, compressive loads on filaments can be generated by molecular motors. This happens, for example, during cell division in the mitotic spindle. In the second part of the thesis, we investigate how the stochastic nature of such motor-generated forces influences the buckling behavior of filaments. In chapter 2 we review briefly the buckling instability problem of rods on the macroscopic scale and introduce an analytical model for buckling of filaments or elastic rods in two spatial dimensions in the presence of thermal fluctuations. We present an analytical treatment of the buckling instability in the presence of thermal fluctuations based on a renormalization-like procedure in terms of the non-linear sigma model where we integrate out short-wavelength fluctuations in order to obtain an effective theory for the mode of the longest wavelength governing the buckling instability. We calculate the resulting shift of the critical force by fluctuation effects and find that, in two spatial dimensions, thermal fluctuations increase this force. 
Furthermore, in the buckled state, thermal fluctuations lead to an increase in the mean projected length of the filament in the force direction. As a function of the contour length, the mean projected length exhibits a cusp at the buckling instability, which becomes rounded by thermal fluctuations. Our main result is the observation that a buckled filament is stretched by thermal fluctuations, i.e., its mean projected length in the direction of the applied force is increased by thermal fluctuations. Our analytical results are confirmed by Monte Carlo simulations for buckling of semiflexible filaments in two spatial dimensions. We also perform Monte Carlo simulations in higher spatial dimensions and show that the increase in projected length by thermal fluctuations is less pronounced than in two dimensions and depends strongly on the choice of boundary conditions. In the second part of this work, we present a model for buckling of semiflexible filaments under the action of molecular motors. We investigate a system in which a group of motors moves along a clamped filament carrying a second filament as a cargo. The cargo filament is pushed against the wall and eventually buckles. The force-generating motors can stochastically unbind from and rebind to the filament during the buckling process. We formulate a stochastic model of this system and calculate the mean first passage time for the unbinding of all linking motors, which corresponds to the transition back to the unbuckled state of the cargo filament in a mean-field model. Our results show that for sufficiently short microtubules the movement of kinesin-1 motors is affected by the load force generated by the cargo filament. Our predictions could be tested in future experiments.
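The zero-temperature reference point for the analysis above is the classical Euler instability. As a reminder (the prefactor depends on the boundary conditions; the value below holds for hinged ends), a rod of bending rigidity $\kappa$ and length $L$ buckles once the compressive force exceeds

```latex
F_c = \frac{\pi^2 \kappa}{L^2},
\qquad \kappa = k_B T\, \ell_p ,
```

where the second relation connects the bending rigidity of a semiflexible filament to its persistence length $\ell_p$; the thermal corrections studied in the thesis become relevant when $L$ is not negligible compared to $\ell_p$.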
Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to determine whether magnetic fields are generated by a dynamo process or represent relics from the formation process, and whether they show a behavior similar to the Sun or something very different, it is essential to investigate their structure and temporal evolution. Fortunately, nature provides us with the possibility to indirectly observe surface topologies on distant stars by means of the Doppler shift and the polarization of light, though not without challenges. Based on these effects, the so-called Zeeman-Doppler Imaging technique is a powerful method to retrieve magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions could be reconstructed by Zeeman-Doppler Imaging (ZDI). However, the implementation of this method often relies on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex and small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to be able to use all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of a newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star. 
To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis and Artificial Neural Networks. The latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as the one we are dealing with are potentially ill-posed and require a regularization method. We propose a new regularization scheme, which implements a local entropy function that accounts for the peculiarities of the reconstruction of localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on Principal Component Analysis. In contrast to other multi-line techniques, which extract a sort of mean profile from a large number of spectral lines, this method allows individual spectral lines to be extracted and thus permits an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so we validate the capability of our inversion code, and we also assess limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived temperature and magnetic field surface distributions from spectropolarimetric data obtained in 2004 and 2007. This gives, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
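The general idea behind PCA-based multi-line denoising can be illustrated with a minimal sketch on synthetic data (this illustrates the principle only and is not the iMap implementation; the function name and all parameter values are invented for the example):

```python
import numpy as np

def pca_denoise(profiles, n_components=2):
    """Denoise a set of spectral line profiles by projecting them onto
    the leading principal components.  `profiles` has shape
    (n_lines, n_wavelengths); the low-rank reconstruction retains the
    coherent line signal while suppressing uncorrelated noise."""
    mean = profiles.mean(axis=0)
    centered = profiles - mean
    # SVD of the centered data matrix; rows of vt are the principal
    # components ("eigenprofiles") in wavelength space.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    # Zero out all but the strongest components before reconstructing.
    s = s.copy()
    s[n_components:] = 0.0
    return u @ np.diag(s) @ vt + mean

# Synthetic example: 200 noisy realizations of a Gaussian absorption line.
rng = np.random.default_rng(0)
wl = np.linspace(-1.0, 1.0, 101)
clean = 1.0 - 0.5 * np.exp(-wl**2 / 0.05)
noisy = clean + rng.normal(0.0, 0.05, size=(200, wl.size))
denoised = pca_denoise(noisy, n_components=2)

rms_noisy = np.sqrt(((noisy - clean) ** 2).mean())
rms_denoised = np.sqrt(((denoised - clean) ** 2).mean())
print(rms_denoised < rms_noisy)  # the low-rank reconstruction has a smaller residual
```

The key design choice, mirrored in the abstract, is that the decomposition acts on a collection of individual line profiles rather than collapsing them into a single mean profile, so each line remains available for a line-specific inversion.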
Classical semiconductor physics has been continuously improving electronic components such as diodes, light-emitting diodes, solar cells and transistors based on highly purified inorganic crystals over the past decades. Organic semiconductors, notably polymeric, are a comparatively young field of research, the first light-emitting diode based on conjugated polymers having been demonstrated in 1990. Polymeric semiconductors are of tremendous interest for high-volume, low-cost manufacturing ("printed electronics"). Due to their rather simple device structure mostly comprising only one or two functional layers, polymeric diodes are much more difficult to optimize compared to small-molecular organic devices. Usually, functions such as charge injection and transport are handled by the same material which thus needs to be highly optimized. The present work contributes to expanding the knowledge on the physical mechanisms determining device performance by analyzing the role of charge injection and transport on device efficiency for blue and white-emitting devices, based on commercially relevant spiro-linked polyfluorene derivatives. It is shown that such polymers can act as very efficient electron conductors and that interface effects such as charge trapping play the key role in determining the overall device efficiency. This work contributes to the knowledge of how charges drift through the polymer layer to finally find neutral emissive trap states and thus allows a quantitative prediction of the emission color of multichromophoric systems, compatible with the observed color shifts upon driving voltage and temperature variation as well as with electrical conditioning effects. In a more methodically oriented part, it is demonstrated that the transient device emission observed upon terminating the driving voltage can be used to monitor the decay of geminately-bound species as well as to determine trapped charge densities. 
This enables direct comparisons with numerical simulations based on the known properties of charge injection, transport and recombination. The method of charge extraction under linear increasing voltages (CELIV) is investigated in some detail, correcting for errors in the published approach and highlighting the role of non-idealized conditions typically present in experiments. An improved method is suggested to determine the field dependence of charge mobility in a more accurate way. Finally, it is shown that the neglect of charge recombination has led to a misunderstanding of experimental results in terms of a time-dependent mobility relaxation.
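For orientation, in the standard low-conductivity CELIV analysis a voltage ramp $U(t) = At$ is applied to a film of thickness $d$, and the mobility is estimated from the time $t_{\max}$ at which the extraction current transient peaks. A commonly quoted form (with an empirical correction term in the denominator) is

```latex
\mu \approx \frac{2 d^{2}}{3 A\, t_{\max}^{2}
\left[ 1 + 0.36\,\Delta j / j(0) \right]} ,
```

where $\Delta j$ is the height of the extraction peak above the capacitive displacement current $j(0)$; the thesis corrects and refines this type of idealized analysis for non-ideal experimental conditions.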
In normal everyday viewing, we perform large eye movements (saccades) and miniature or fixational eye movements. Most of our visual perception occurs while we are fixating. However, our eyes are perpetually in motion. Properties of these fixational eye movements, which are partly controlled by the brainstem, change depending on the task and the visual conditions. Currently, fixational eye movements are poorly understood because they serve the two contradictory functions of gaze stabilization and counteraction of retinal fatigue. In this dissertation, we investigate the spatial and temporal properties of time series of eye position acquired from participants staring at a tiny fixation dot or at a completely dark screen (with the instruction to fixate a remembered stimulus); these time series were acquired with high spatial and temporal resolution. First, we suggest an advanced algorithm to separate the slow phases (named drift) and fast phases (named microsaccades) of these movements, which are considered to play different roles in perception. On the basis of this identification, we investigate and compare the temporal scaling properties of the complete time series and of those time series from which the microsaccades have been removed. For the time series obtained during fixations on a stimulus, we were able to show that they deviate from Brownian motion. On short time scales, eye movements are governed by persistent behavior and on longer time scales by anti-persistent behavior. The crossover point between these two regimes remains unchanged by the removal of microsaccades but is different in the horizontal and the vertical components of the eyes. Other analyses target the properties of the microsaccades, e.g., the rate and amplitude distributions, and we investigate whether microsaccades are triggered dynamically, as a result of earlier events in the drift, or completely randomly. 
The results obtained from using a simple box-count measure contradict the hypothesis of a purely random generation of microsaccades (Poisson process). Second, we set up a model for the slow part of the fixational eye movements. The model is based on a delayed random walk approach within the velocity related equation, which allows us to use the data to determine control loop durations; these durations appear to be different for the vertical and horizontal components of the eye movements. The model is also motivated by the known physiological representation of saccade generation; the difference between horizontal and vertical components concurs with the spatially separated representation of saccade generating regions. Furthermore, the control loop durations in the model suggest an external feedback loop for the horizontal but not for the vertical component, which is consistent with the fact that an internal feedback loop in the neurophysiology has only been identified for the vertical component. Finally, we confirmed the scaling properties of the model by semi-analytical calculations. In conclusion, we were able to identify several properties of the different parts of fixational eye movements and propose a model approach that is in accordance with the described neurophysiology and described limitations of fixational eye movement control.
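The slow (drift) component modeled above can be caricatured by a velocity-based delayed random walk. The following minimal sketch (all parameter values are placeholders chosen for stability, not fitted control-loop durations) shows the structure of such a model:

```python
import numpy as np

def delayed_random_walk(n_steps, gamma=0.05, tau=10, noise=1.0, seed=1):
    """Integrate a velocity-based delayed random walk,
        v[t+1] = v[t] - gamma * v[t - tau] + xi[t],
    where the delayed term models a control loop of duration tau,
    and return the position series x (cumulative sum of v).
    Illustrative sketch of the model class only."""
    rng = np.random.default_rng(seed)
    v = np.zeros(n_steps)
    for t in range(n_steps - 1):
        feedback = v[t - tau] if t >= tau else 0.0  # delayed control term
        v[t + 1] = v[t] - gamma * feedback + noise * rng.normal()
    return np.cumsum(v)

x = delayed_random_walk(5000)
print(x.shape, np.isfinite(x).all())
```

Distinct delays `tau` for the horizontal and vertical components reproduce the asymmetry described above; the scaling analysis (persistent on short, anti-persistent on long time scales) can then be applied to the simulated `x` exactly as to the measured data.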
The Sun is a star which, due to its proximity, has a tremendous influence on Earth. Since its very first days, mankind has tried to "understand the Sun", and especially in the 20th century science has uncovered many of the Sun's secrets by using high-resolution observations and describing the Sun by means of models. The Sun's activity, as expressed in its magnetic cycle, is closely related to the sunspot numbers. Flares play a special role because they release large energies on very short time scales. They are correlated with enhanced electromagnetic emission all over the spectrum. Furthermore, flares are sources of energetic particles. Hard X-ray observations (e.g., by NASA's RHESSI spacecraft) reveal that a large fraction of the energy released during a flare is transferred into the kinetic energy of electrons. However, the mechanism that accelerates a large number of electrons to high energies (beyond 20 keV) within fractions of a second is not yet understood. The thesis at hand presents a model for the generation of energetic electrons during flares that explains the electron acceleration based on parameters obtained from real ground- and space-based observations. According to this model, photospheric plasma flows build up electric potentials in the active regions in the photosphere. Usually these electric potentials are associated with electric currents closed within the photosphere. However, as a result of magnetic reconnection, a magnetic connection between the regions of different magnetic polarity on the photosphere can be established through the corona. Due to the significantly higher electric conductivity in the corona, the photospheric electric power supply can be closed via the corona. Subsequently a high electric current is formed, which leads to the generation of hard X-ray radiation in the dense chromosphere. This idea is modelled and investigated by means of electric circuits. 
For this purpose, the microscopic plasma parameters, the magnetic field geometry and hard X-ray observations are used to obtain parameters for modelling macroscopic electric components, such as electric resistors, which are connected with each other. This model demonstrates that such a coronal electric current is correlated with large-scale electric fields, which can quickly accelerate electrons up to relativistic energies. The results of these calculations are encouraging. The electron fluxes predicted by the model are in agreement with the electron fluxes deduced from the measured photon fluxes. Additionally, the model developed in this thesis proposes a new way to understand the observed double-footpoint hard X-ray sources.
Giant vesicles may contain several spatial compartments formed by phase separation within their enclosed aqueous solution. This phenomenon might be related to molecular crowding, fractionation and protein sorting in cells. To elucidate this process we used two chemically dissimilar polymers, polyethylene glycol (PEG) and dextran, encapsulated in giant vesicles. The dynamics of the phase separation of this polymer solution enclosed in vesicles is studied by a concentration quench, i.e. by exposing the vesicles to hypertonic solutions. The excess membrane area produced by dehydration can either form tubular structures (also known as tethers) or be utilized for morphological changes of the vesicle, depending on the interfacial tension between the coexisting phases and the tensions between the membrane and the two phases. Membrane tube formation is coupled to the phase separation process. Apparently, the energy released by the phase separation is utilized to overcome the energy barrier for tube formation. The tubes may be adsorbed at the interface to form a two-dimensional structure. The membrane stored in the form of tubes can be retracted under a small tension perturbation. Furthermore, a wetting transition, which has been reported in only a few experimental systems, was discovered in this system. By increasing the polymer concentration, the PEG-rich phase changed from complete wetting to partial wetting of the membrane. If sufficient excess membrane area is available in a vesicle in which both phases wet the membrane, one of the phases will bud off from the vesicle body, which leads to the separation of the two phases. This wetting-induced budding is governed by the surface energy and modulated by the membrane tension. This was demonstrated by micropipette aspiration experiments on vesicles encapsulating two phases. The budding of one phase can significantly decrease the surface energy by decreasing the contact area between the coexisting phases. 
The elasticity of the membrane allows it to adjust its tension automatically to balance the pulling force exerted by the interfacial tension of the two liquid phases at the three-phase contact line. The budding of the phase enriched in one polymer may be relevant to selective protein transport between lumens by means of vesicles in cells.
Supernovae are known to be the dominant energy source for driving turbulence in the interstellar medium. Yet, their effect on magnetic field amplification in spiral galaxies is still poorly understood. Analytical models based on the uncorrelated-ensemble approach predicted that any created field will be expelled from the disk before a significant amplification can occur. By means of direct simulations of supernova-driven turbulence, we demonstrate that this is not the case. Accounting for vertical stratification and galactic differential rotation, we find an exponential amplification of the mean field on timescales of 100 Myr. The self-consistent numerical verification of such a “fast dynamo” is highly beneficial for explaining the observed strong magnetic fields in young galaxies. Furthermore, we highlight the importance of rotation in the generation of helicity by showing that a similar mechanism based on Cartesian shear does not lead to a sustained amplification of the mean magnetic field. This finding impressively confirms the classical picture of a dynamo based on cyclonic turbulence.
The aim of this thesis is to achieve a deep understanding of the working mechanism of polymer-based solar cells and to improve the device performance. Two types of polymer-based solar cells are studied here: all-polymer solar cells comprising macromolecular donors and acceptors based on poly(p-phenylene vinylene), and hybrid cells comprising a PPV copolymer in combination with a novel small-molecule electron acceptor. To understand the interplay between morphology and photovoltaic properties in all-polymer devices, I compared the photocurrent characteristics and excited-state properties of bilayer and blend devices with different nano-morphology, which was fine-tuned by using solvents with different boiling points. The main conclusion from these complementary measurements was that the performance-limiting step is the field-dependent generation of free charge carriers, while bimolecular recombination and charge extraction do not compromise device performance. These findings imply that the proper design of the donor-acceptor heterojunction is of major importance towards the goal of high photovoltaic efficiencies. Regarding polymer-small-molecule hybrid solar cells, I combined the hole-transporting polymer M3EH-PPV with a novel Vinazene-based electron acceptor. This molecule can be deposited either from solution or by thermal evaporation, allowing a large variety of layer architectures to be realized. I then demonstrated that the layer architecture has a large influence on the photovoltaic properties. Solar cells with very high fill factors of up to 57 % and an open-circuit voltage of 1 V could be achieved by realizing a sharp and well-defined donor-acceptor heterojunction. In the past, fill factors exceeding 50 % have only been observed for polymers in combination with soluble fullerene derivatives or nanocrystalline inorganic semiconductors as the electron-accepting component. 
The finding that proper processing of polymer-vinazene devices leads to similar high values is a major step towards the design of efficient polymer-based solar cells.
Microfabricated solid-state surfaces, also called 'atom chips', have become a well-established technique to trap and manipulate atoms. This has simplified applications in atom interferometry, quantum information processing, and studies of many-body systems. Magnetic trapping potentials with arbitrary geometries are generated with atom chips by miniaturized current-carrying conductors integrated on a solid substrate. Atoms can be trapped and cooled to microkelvin and even nanokelvin temperatures in such microchip traps. However, cold atoms can be significantly perturbed by the chip surface, typically held at room temperature. The magnetic field fluctuations generated by thermal currents in the chip elements may induce spin flips of atoms and result in loss, heating and decoherence. In this thesis, we extend previous work on spin flip rates induced by magnetic noise and consider the more complex geometries that are typically encountered in atom chips: layered structures and metallic wires of finite cross-section. We also discuss a few aspects of atom chip traps built with superconducting structures, which have been suggested as a means to suppress magnetic field fluctuations. The thesis describes calculations of spin flip rates based on magnetic Green functions that are computed analytically and numerically. For a chip with a top metallic layer, the magnetic noise depends essentially on the thickness of that layer, as long as the layers below have a much smaller conductivity. Based on this result, scaling laws for loss rates above a thin metallic layer are derived. A good agreement with experiments is obtained in the regime where the atom-surface distance is comparable to the skin depth of the metal. Since in the experiments metallic layers are always etched to separate wires carrying different currents, the impact of the finite lateral wire size on the magnetic noise has been taken into account. 
The local spectrum of the magnetic field near a metallic microstructure has been investigated numerically with the help of boundary integral equations. Above flat wires of finite lateral width, the magnetic noise depends significantly on the polarization, in stark contrast to an infinitely wide wire. Correlations between multiple wires are also taken into account. In the last part, superconducting atom chips are considered. Magnetic traps generated by superconducting wires in the Meissner state and the mixed state are studied analytically by a conformal mapping method and also numerically. The properties of the traps created by superconducting wires are investigated and compared to normal conducting wires: they behave qualitatively quite similarly and, due to the advantage of low magnetic noise, open a route to further trap miniaturization. We discuss critical currents and fields for several geometries.
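A useful reference scale in this context is the skin depth of the metal evaluated at the atomic spin-flip (Larmor) frequency $\omega$,

```latex
\delta(\omega) = \sqrt{\frac{2}{\mu_0 \sigma \omega}} ,
```

where $\sigma$ is the conductivity. For a layer of thickness $t$ much smaller than the atom-surface distance $d$, with $d \ll \delta$, the thermal magnetic noise scales roughly as $\sigma t / d^{2}$ (prefactors omitted here), which is the qualitative origin of the thin-layer scaling laws mentioned above.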
This work describes the analysis of observations of two sunspots in two-dimensional spectro-polarimetry. The data were acquired with the Fabry-Pérot interferometer of the University of Göttingen at the Vacuum Tower Telescope on Tenerife. For the active region NOAA 9516, the full Stokes vector of the polarized light was observed in single exposures in the absorption line at 630.249 nm, and for the active region NOAA 9036 a 90-minute time series of the circularly polarized light was recorded at a wavelength of 617.3 nm. From the reduced data, values for the intensity, the line-of-sight velocity, the magnetic field strength, and several further plasma parameters are derived. Several approaches to the inversion of solar model atmospheres are applied and compared. The partly considerable error contributions are discussed in detail. The frequency behavior of the results and their dependence on position and time are further analyzed by means of Fourier and wavelet transforms. As a result, the existence of a high-frequency band of velocity oscillations with a central period of 75 seconds (13 mHz) can be confirmed. At larger photospheric heights of about 500 km, the majority of the associated shock waves originates from the dark parts of the granules, in contrast to other frequency ranges. The 75-second oscillations are also observed in the active region, most notably in the light bridge. In the identified bands of oscillatory velocity power, pronounced structures are discernible in a dark penumbral feature as well as in the light bridge, moving into the quiet Sun with a horizontal velocity of 5-8 km/s. These show a distinct increase in power, especially in the 5-minute band, and are possibly related to the phenomenon of "Evershed clouds". 
Although limited by a very low signal-to-noise ratio and large error contributions, magnetic field variations with a period of six minutes are also observed at the transition from umbra to penumbra in the vicinity of a light bridge. To achieve the described results, existing visualization methods for frequency analysis were improved or newly developed, in particular for the results of the wavelet transform.
This thesis describes two main projects. The first one is the optimization of a hierarchical search strategy for unknown pulsars. This project is divided into two parts; the first (and main) part is the semi-coherent hierarchical optimization strategy. The second part is a coherent hierarchical optimization strategy which can be used in a project like Einstein@Home. In both strategies we have found that the three-stage search is the optimum strategy to search for unknown pulsars. For the second project we have developed computer software for a coherent multi-IFO (interferometer observatory) search. To validate our software, we have worked on simulated data as well as hardware-injected pulsar signals in the fourth LIGO science run (S4). While with the current sensitivity of our detectors we do not expect to detect any true gravitational-wave signals in our data, we can still set upper limits on the strength of gravitational-wave signals. These upper limits tell us, in effect, the weakest signal strength we would have been able to detect. We have also used our software to set upper limits on the signal strength of known isolated pulsars using LIGO fifth science run (S5) data.
The intergalactic medium is kept highly photoionised by the intergalactic UV background radiation field generated by the overall population of quasars and galaxies. In the vicinity of sources of UV photons, such as luminous high-redshift quasars, the UV radiation field is enhanced due to the local source contribution. The higher degree of ionisation is visible as a reduced line density or, more generally, as a decreased level of absorption in the Lyman alpha forest of neutral hydrogen. This so-called proximity effect has been detected with high statistical significance towards luminous quasars. If quasars radiate rather isotropically, background quasar sightlines located near foreground quasars should show a region of decreased Lyman alpha absorption close to the foreground quasar. Despite considerable effort, such a transverse proximity effect has only been detected in a few cases. So far, studies of the transverse proximity effect were mostly limited by the small number of suitable projected pairs or groups of high-redshift quasars. With the aim to substantially increase the number of quasar groups in the vicinity of bright quasars, we conduct a targeted survey for faint quasars around 18 well-studied quasars, employing slitless spectroscopy. Among the reduced and calibrated slitless spectra of 29000 objects on a total area of 4.39 square degrees, we discover in total 169 previously unknown quasar candidates based on their prominent emission lines. 81 potential z>1.7 quasars are selected for confirmation by slit spectroscopy at the Very Large Telescope (VLT). We are able to confirm 80 of these; 64 of the newly discovered quasars reside at z>1.7. The high success rate of the follow-up observations implies that the majority of the remaining candidates are quasars as well. In 16 of the resulting quasar groups we search for a transverse proximity effect as a systematic underdensity in the HI Lyman alpha absorption. 
We employ a novel technique to characterise the random absorption fluctuations in the forest in order to estimate the significance of the transverse proximity effect. Neither low-resolution nor high-resolution spectra of the background quasars of our groups present evidence for a transverse proximity effect. However, according to Monte Carlo simulations, the effect should be detectable only at the 1-2 sigma level near three of the foreground quasars. Thus, we cannot distinguish between the presence or absence of a weak signature of the transverse proximity effect. The systematic effects of quasar variability, quasar anisotropy and intrinsic overdensities near quasars likely explain the apparent lack of the transverse proximity effect. Even in the absence of these systematic effects, we show that a statistically significant detection of the transverse proximity effect requires at least 5 medium-resolution spectra of background quasars near foreground quasars whose UV flux exceeds the UV background by a factor of 3. Therefore, statistical studies of the transverse proximity effect require large numbers of suitable pairs. Two sightlines towards the central quasars of our survey fields show intergalactic HeII Lyman alpha absorption. A comparison of the HeII absorption to the corresponding HI absorption yields an estimate of the spectral shape of the intergalactic UV radiation field, typically parameterised by the HeII/HI column density ratio eta. We analyse the fluctuating UV spectral shape on both lines of sight and correlate it with seven foreground quasars. On the line of sight towards Q0302-003 we find a harder radiation field near 4 foreground quasars. In the direct vicinity of the quasars, eta is consistent with values of 25-100, whereas at large distances from the quasars eta>200 is required. The second line of sight, towards HE2347-4342, probes lower redshifts where eta is directly measurable in the resolved HeII forest. 
Again we find that the radiation field near the 3 foreground quasars is significantly harder than in general. While eta still shows large fluctuations near the quasars, probably due to radiative transfer, the radiation field is on average harder near the quasars than far away from them. We interpret these discoveries as the first detections of the transverse proximity effect as a local hardness fluctuation in the UV spectral shape. No significant HI proximity effect is predicted for the 7 foreground quasars. In fact, the HI absorption near the quasars is close to or slightly above the average, suggesting that the weak signature of the transverse proximity effect is masked by intrinsic overdensities. However, we show that the UV spectral shape traces the transverse proximity effect even in overdense regions or at large distances. Therefore, the spectral hardness is a sensitive physical measure of the transverse proximity effect that is able to break the density degeneracy affecting the traditional searches.
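The spectral-shape parameter used throughout is the column density ratio

```latex
\eta \equiv \frac{N_{\mathrm{He\,II}}}{N_{\mathrm{H\,I}}} ,
```

a hard, quasar-dominated radiation field doubly ionizes helium efficiently, depressing $N_{\mathrm{He\,II}}$ and yielding low values of $\eta$ (the 25-100 quoted above), whereas a soft, galaxy-dominated field yields high values ($\eta > 200$).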
In biological cells, long-range intracellular traffic is powered by molecular motors which transport various cargos along microtubule filaments. Microtubules possess an intrinsic direction, having a 'plus' and a 'minus' end. Some molecular motors such as cytoplasmic dynein walk to the minus end, while others such as conventional kinesin walk to the plus end. Cells typically have an isopolar microtubule network. This is most pronounced in neuronal axons or fungal hyphae. In these long and thin tubular protrusions, the microtubules are arranged parallel to the tube axis, with the minus ends pointing to the cell body and the plus ends pointing to the tip. In such a tubular compartment, transport by only one motor type leads to 'motor traffic jams': kinesin-driven cargos accumulate at the tip, while dynein-driven cargos accumulate near the cell body. We identify the relevant length scales and characterize the jamming behaviour in these tube geometries using both Monte Carlo simulations and analytical calculations. A possible solution to this jamming problem is to transport cargos with a team of plus motors and a team of minus motors simultaneously, so that they can travel bidirectionally, as observed in cells. The presumably simplest mechanism for such bidirectional transport is a 'tug-of-war' between the two motor teams which is governed by mechanical motor interactions only. We develop a stochastic tug-of-war model and study it with numerical and analytical calculations. We find a surprisingly complex cooperative motility behaviour, and compare our results to the available experimental data, which we reproduce qualitatively and quantitatively.
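The jamming scenario described above can be illustrated with a toy lattice-gas Monte Carlo sketch (a hypothetical minimal model with assumed rates, not the thesis's actual model): plus-end motors enter a tube at the cell body, hop toward the closed tip under mutual exclusion, and detach at a small rate, so that a traffic jam builds up at the tip.

```python
import random

def simulate_tube(n_sites=50, alpha=0.3, delta=0.01, steps=200_000, seed=1):
    """Toy lattice gas of unidirectional motor traffic in a tube: motors
    enter at the cell body (site 0), hop toward the closed tip (last site)
    under mutual exclusion, and detach with a small rate delta.  Because
    the tip is a dead end, a jam of high motor density grows there."""
    rng = random.Random(seed)
    lattice = [0] * n_sites          # 0 = empty site, 1 = motor present
    density = [0.0] * n_sites
    measured = 0
    for t in range(steps):
        i = rng.randrange(-1, n_sites)           # -1 encodes an entry attempt
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1                   # motor attaches at the base
        elif lattice[i] == 1:
            if rng.random() < delta:
                lattice[i] = 0                   # motor detaches
            elif i < n_sites - 1 and lattice[i + 1] == 0:
                lattice[i], lattice[i + 1] = 0, 1  # hop toward the tip
        if t > steps // 2:                       # measure after relaxation
            measured += 1
            for j in range(n_sites):
                density[j] += lattice[j]
    return [d / measured for d in density]
```

Averaging the occupancy over the second half of the run yields a density profile that is low near the entry and close to one in the jammed region at the tip, mirroring the accumulation of kinesin-driven cargos described above.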
In this dissertation, an approach is developed that ensures efficient control of such diverse systems as noisy or chaotic oscillators and neural ensembles. The approach is implemented by a simple linear feedback loop. The dissertation consists of two main parts. One part of the work is dedicated to the application of the suggested technique to a population of neurons, with the goal of suppressing their synchronous collective dynamics. The other part is aimed at investigating linear feedback control of the coherence of a noisy or chaotic self-sustained oscillator. First, we start with the problem of suppressing synchronization in a large population of interacting neurons. The importance of this task is based on the hypothesis that the emergence of pathological brain activity in Parkinson's disease and other neurological disorders is caused by the synchrony of many thousands of neurons. The established therapy for patients with such disorders is permanent high-frequency electrical stimulation via depth microelectrodes, called Deep Brain Stimulation (DBS). In spite of the efficiency of such stimulation, it has several side effects, and the mechanisms underlying DBS remain unclear. In the present work an efficient and simple control technique is suggested. It is designed to ensure suppression of synchrony in a neural ensemble by a minimized stimulation that vanishes as soon as the tremor is suppressed. This vanishing-stimulation technique would be a useful tool for experimental neuroscience; on the other hand, control of collective dynamics in a large population of units represents an interesting physical problem. The main idea of the suggested approach is related to a classical problem of oscillation theory, namely the interaction between a self-sustained (active) oscillator and a passive load (resonator). It is known that under certain conditions the passive oscillator can suppress the oscillations of the active one.
In this thesis a much more complicated case is considered: an active medium which itself consists of thousands of oscillators. By coupling this medium to a specially designed passive oscillator, one can control the collective motion of the ensemble, specifically enhance or suppress it. With a possible application in neuroscience in mind, we concentrate on the problem of suppression. Second, the efficiency of the suggested suppression scheme is illustrated by considering a more complex case, in which the population of neurons generating the undesired rhythm consists of two non-overlapping subpopulations: the first one is affected by the stimulation, while the collective activity is registered from the second one. Generally speaking, the second population can itself be either active or passive; both cases are considered here. The possible applications of the suggested technique are discussed. Third, the influence of external linear feedback on the coherence of a noisy or chaotic self-sustained oscillator is considered. Coherence is one of the main properties of self-oscillating systems and plays a key role in the construction of clocks, electronic generators, lasers, etc. The coherence of a noisy limit-cycle oscillator is, in the context of phase dynamics, quantified by the phase diffusion constant, which in turn is proportional to the width of the spectral peak of the oscillations. Many chaotic oscillators can be described within the framework of phase dynamics, and their coherence can therefore also be quantified by the phase diffusion constant. An analytical theory for general linear feedback, treating noisy systems in the linear and Gaussian approximation, is developed and validated by numerical results.
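The phase diffusion constant mentioned above can be estimated numerically. The following minimal sketch (illustrative parameters, not taken from the thesis) integrates the noisy phase equation with the Euler-Maruyama scheme and recovers D from the linear growth of the phase variance, Var[phi(t)] = 2 D t.

```python
import math
import random

def estimate_phase_diffusion(D_in=0.05, omega=1.0, dt=0.02, t_max=40.0,
                             n_traj=300, seed=2):
    """Euler-Maruyama integration of dphi = omega*dt + sqrt(2*D_in)*dW
    for an ensemble of trajectories; the phase diffusion constant is read
    off from the variance of the final phases via Var[phi] = 2*D*t_max."""
    rng = random.Random(seed)
    n_steps = int(t_max / dt)
    amp = math.sqrt(2.0 * D_in * dt)       # noise increment amplitude
    finals = []
    for _ in range(n_traj):
        phi = 0.0
        for _ in range(n_steps):
            phi += omega * dt + amp * rng.gauss(0.0, 1.0)
        finals.append(phi)
    mean = sum(finals) / n_traj
    var = sum((p - mean) ** 2 for p in finals) / (n_traj - 1)
    return var / (2.0 * t_max)
```

For purely additive noise the Euler-Maruyama step is exact, so the estimate converges to the input diffusion constant up to sampling error; a feedback term would simply be added to the drift in the inner loop.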
The mammalian brain is, with its numerous neural elements and structured complex connectivity, one of the most complex systems in nature. Recently, large-scale corticocortical connectivities, both structural and functional, have received a great deal of research attention, especially using the approach of complex networks. Here, we try to shed some light on the relationship between structural and functional connectivities by studying synchronization dynamics in a realistic anatomical network of cat cortical connectivity. We model the cortical areas by a subnetwork of interacting excitable neurons (multilevel model) and by a neural mass model (population model). With weak couplings, the multilevel model displays biologically plausible dynamics, and the synchronization patterns reveal a hierarchical cluster organization in the network structure. By comparing the dynamical clusters to the topological communities of the network, we can identify a group of brain areas involved in multifunctional tasks. With strong couplings in the multilevel model, and when using the neural mass model, the dynamics are characterized by well-defined oscillations. The synchronization patterns are then mainly determined by the node intensity (the total input strength of a node); the detailed network topology is of secondary importance. The biologically improved multilevel model exhibits similar dynamical patterns in the two regimes. Thus, the study of synchronization in a multilevel complex network model of the cortex can provide insights into the relationship between network topology and the functional organization of complex brain networks.
Water vapour in the stratosphere and troposphere is one of the most important atmospheric greenhouse gases. Besides its relevance for climate, it strongly influences the formation of polar stratospheric clouds as well as atmospheric chemistry. For the first time worldwide, a powerful, mobile, scanning water-vapour DIAL for three-dimensionally highly resolved measurements of atmospheric water vapour is to be developed within a German research consortium. With the water-vapour DIAL, water vapour concentrations in the atmosphere can be measured with high temporal and spatial resolution. The DIAL is based on a titanium-sapphire laser or, alternatively, an OPO (optical parametric oscillator) laser. The pump laser required for optically pumping these lasers was developed within this thesis in the Nonlinear Optics group of the Institute of Physics at the University of Potsdam. A high-resolution, mobile DIAL requires a pump laser with large pulse energies, good beam quality and high efficiency. To achieve these goals, a frequency-stabilised MOPA system (Master Oscillator Power Amplifier) based on birefringence-compensated, transversely diode-pumped laser rods was developed and investigated. Along the way, different possible realisations of the MOPA system were examined. In this context, the solid-state laser materials Yb:YAG [1], core-doped Nd:YAG ceramic [2] and conventional Nd:YAG were presented and assessed with regard to their suitability for this MOPA system. After Nd:YAG had been chosen as the laser-active material, the laser system was designed on the basis of gain calculations. The developed gain calculation accounts for the conditions in real systems by taking into account radius-dependent intensities and a radially inhomogeneous inversion density.
The frequency stabilisation of the pulsed oscillator (frequency stability of 1 MHz) was carried out using the Pound-Drever-Hall technique. The frequency stability of the oscillator is measured with the heterodyne method. After investigating different configurations of linear and ring oscillators, a ring oscillator with two laser heads was built, which is seeded by an external laser of fixed frequency. It emits a pulse energy of Eout = 21 mJ at a repetition rate of 400 Hz with nearly diffraction-limited beam quality (M2 < 1.2). These laser pulses were amplified first by a preamplifier stage and subsequently by two birefringence-compensated main amplifiers in a double pass. A good beam quality (M2 = 1.75) was achieved, among other measures, by realising the double pass through the main amplifiers with a phase-conjugating mirror (SF6) based on stimulated Brillouin scattering. The developed laser emits pulses with a duration of 25 ns and an energy of 250 mJ. Overall, a so far unique laser system was developed; the achieved combination of frequency stability, beam quality and power has not been documented in the literature before. In the future, the pulse energy of the system is to be increased further by using core-doped ceramic laser materials, higher pump powers in the main amplifiers and phase-conjugating mirrors made of fused silica. [1] M. Ostermeyer, A. Straesser, "Theoretical investigation of Yb:YAG as laser material for nanosecond pulse emission with large energies in the joule range", Optics Communications, Vol. 274, pp. 422-428 (2007) [2] A. Sträßer and M. Ostermeyer, "Improving the brightness of side pumped power amplifiers by using core doped ceramic rods", Optics Express, Vol. 14, pp. 6687-6693 (2006)
The interaction between neuronal cells can be identified as the computing mechanism of the brain. Neurons are complex cells that do not operate in isolation; they are organized in a highly connected network structure. There is experimental evidence that groups of neurons dynamically synchronize their activity and process brain functions at all levels of complexity. A fundamental step to prove this hypothesis is to analyze large sets of single neurons recorded in parallel. Techniques to obtain such data are now available, but advancements are needed in the pre-processing of the large volumes of acquired data and in data analysis techniques. Major issues include extracting the signal of single neurons from the noisy recordings (referred to as spike sorting) and assessing the significance of the synchrony. This dissertation addresses these issues with two complementary strategies, both founded on the manipulation of point processes under rigorous analytical control. On the one hand, I modeled the effect of spike sorting errors on correlated spike trains by corrupting them with realistic failures, and studied the corresponding impact on correlation analysis. The results show that correlations between multiple parallel spike trains are severely affected by spike sorting, especially by erroneously missing spikes. When this happens, sorting strategies characterized by classifying only "good" spikes (conservative strategies) lead to less accurate results than "tolerant" strategies. On the other hand, I investigated the effectiveness of methods for assessing significance that create surrogate data by displacing spikes around their original position (referred to as dithering). I provide analytical expressions for the probability of coincidence detection after dithering. The effectiveness of spike dithering in creating surrogate data strongly depends on the dithering method and on the method of counting coincidences.
Closed-form expressions and bounds are derived for the case where the dither equals the allowed coincidence interval. This work provides new insights into the methodologies of identifying synchrony in large-scale neuronal recordings, and of assessing its significance.
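The dithering idea can be sketched with a toy example (hypothetical spike counts and windows, not the thesis's analytical framework): spikes of one train are displaced uniformly and coincidences are recounted; for genuinely correlated trains, the original coincidence count lies far above the surrogate distribution.

```python
import random

def dither(train, d, rng):
    """Uniform spike dithering: displace each spike independently by a
    random offset drawn from [-d, d], destroying fine temporal structure."""
    return sorted(t + rng.uniform(-d, d) for t in train)

def count_coincidences(a, b, w):
    """Count spikes in train `a` that have a partner in `b` within +/- w."""
    return sum(1 for t in a if any(abs(t - s) <= w for s in b))

rng = random.Random(5)
# two trains sharing 40 injected synchronous spikes plus independent background
common = [rng.uniform(0.0, 100.0) for _ in range(40)]
a = sorted(common + [rng.uniform(0.0, 100.0) for _ in range(60)])
b = sorted(common + [rng.uniform(0.0, 100.0) for _ in range(60)])

c_orig = count_coincidences(a, b, 0.1)
c_surr = [count_coincidences(dither(a, 5.0, rng), b, 0.1) for _ in range(50)]
```

Comparing `c_orig` against the empirical surrogate distribution `c_surr` is the basic significance test: the injected synchrony survives in the original data but is destroyed by the dither, so the original count exceeds every surrogate count by a wide margin.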
Atmospheric circulation and the surface mass balance in a regional climate model of Antarctica
(2007)
Understanding the Earth's climate system, and particularly climate variability, presents one of the most difficult and urgent challenges in science. The Antarctic plays a crucial role in the global climate system, since it is the principal region of radiative energy deficit and atmospheric cooling. An assessment of the regional climate model HIRHAM is presented. The simulations are generated with the HIRHAM model, which has been modified for Antarctic applications. With a horizontal resolution of 55 km, the model has been run for the period 1958-1998, creating long-term simulations from initial and boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA40 re-analysis. The model output is compared with observations from surface stations, upper-air data, global atmospheric analyses and satellite data. The evaluation shows that the simulations with the HIRHAM model capture both the large-scale and regional-scale circulation features, with generally small biases in the modeled variables. On the annual time scale, the largest errors in the model simulations are an overestimation of total cloud cover and a cold bias in near-surface temperature over the interior of the Antarctic plateau. The low-level temperature inversion as well as the low-level wind jet are well captured by the model. Decadal-scale processes were studied based on trend calculations. The long-term run was divided into two 20-year parts. The 2m temperature, 500 hPa temperature, MSLP, precipitation and net mass balance trends were calculated for both periods and over 1958-1998. During the last two decades strong surface cooling was observed over East Antarctica; this result is in good agreement with Chapman and Walsh (2005), who calculated the temperature trend based on observational data. The MSLP trend reveals a large disparity between the first and second parts of the 40-year run.
The overall trend shows a strengthening of the circumpolar vortex and of the continental anticyclone. The net mass balance as well as precipitation show a positive trend over the Antarctic Peninsula region, along Wilkes Land and in Dronning Maud Land. The Antarctic ice sheet grows over the eastern part of Antarctica, with small exceptions in Dronning Maud Land and Wilkes Land, and thins over the Antarctic Peninsula; this result is in good agreement with the satellite-measured altimetry presented in Davis (2005). To better understand the horizontal structure of the MSLP, temperature and net mass balance trends, the influence of the Southern Annular Mode (SAM) on the Antarctic climate was investigated. The main meteorological parameters during the positive and negative Antarctic Oscillation (AAO) phases were compared with each other. A positive/negative AAO index means strengthening/weakening of the circumpolar vortex, poleward/northward storm tracks and prevailing/weakening westerly winds. For a detailed investigation of global teleconnections, two positive periods and one negative period of the AAO phase were chosen. The differences in MSLP and 2m temperature between positive and negative AAO years during the winter months partly explain the surface cooling during the last decades.
Electron transfer phenomena in proteins represent one of the most common types of biochemical reactions. They play a central role in energy conversion pathways in living cells and are crucial components of respiration and photosynthesis. These complex biochemical reaction cascades consist of a series of proteins and protein complexes that couple a charge transfer to different forms of chemical energy. The efficiency and sophisticated optimisation of signal transfer in these natural redox chains has inspired the engineering of artificial architectures mimicking essential properties of their natural analogues. The implementation of direct electron transfer (DET) in protein assemblies was a breakthrough in bioelectronics, providing a simple and efficient way of coupling biological recognition events to a signal transducer. DET avoids the use of redox mediators, reducing potential interferences and side reactions, and is more compatible with in vivo conditions. However, only a few haem proteins, including the redox protein cytochrome c (cyt.c), and blue copper enzymes show efficient DET on different kinds of electrodes. Previous investigations with cyt.c have mainly focused on heterogeneous electron transfer of monolayers of this protein on gold. An important advance was the fabrication of cyt.c multilayers by electrostatic layer-by-layer self-assembly. The ease of fabrication, the stability, and the controllable permeability of polyelectrolyte multilayers have made them particularly attractive for electroanalytical applications. With cyt.c and sulfonated polyaniline, fully electro-active multilayers of the redox protein could be prepared for the first time. This approach was extended to design an analytical signal chain based on multilayers of cyt.c and xanthine oxidase (XOD).
The system does not need an external mediator but relies on the in situ generation of a mediating radical, and thus allows a signal transfer from hypoxanthine via the substrate-converting enzyme and cyt.c to the electrode. Another kind of signal chain is based on assembling proteins in complexes on electrodes in such a way that a direct protein-protein electron transfer becomes feasible. In analogy to natural protein communication, this design does not need a redox mediator. For this purpose, cyt.c and the enzyme bilirubin oxidase (BOD, EC 1.3.3.5) are co-immobilized in a self-assembled polyelectrolyte multilayer on gold electrodes. Although these two proteins are not natural reaction partners, the protein architecture facilitates an electron transfer from the electrode via multiple protein layers to molecular oxygen, resulting in a significant catalytic reduction current. Finally, we describe a novel strategy for multi-protein layer-by-layer self-assembly combining cyt.c with the enzyme sulfite oxidase (SOx) without the use of any additional polymer. Electrostatic interactions between these two proteins, which have well-separated pI values, were found sufficient for the layer-by-layer deposition of both biomolecules during the assembly process from a low ionic strength buffer. It is anticipated that the concepts described in this work will stimulate further progress in the multilayer design of even more complex biomimetic signal cascades taking advantage of direct communication between proteins.
Giacconi et al. (1962) discovered a diffuse cosmic X-ray background with rocket experiments when they searched for lunar X-ray emission. Later satellite missions found a spectral peak in the cosmic X-ray background at ~30 keV. Imaging X-ray satellites such as ROSAT (1990-1999) were able to resolve up to 80% of the background below 2 keV into single point sources, mainly active galaxies. The cosmic X-ray background is the integration of all accreting super-massive (several million solar masses) black holes in the centres of active galaxies over cosmic time. Synthesis models need further populations of X-ray absorbed active galactic nuclei (AGN) in order to explain the cosmic X-ray background peak at ~30 keV. Current X-ray missions such as XMM-Newton and Chandra offer the possibility of studying these additional populations. This Ph.D. thesis studies the populations that dominate the X-ray sky. For this purpose the 120 ksec XMM-Newton Marano field survey, named after an earlier optical quasar survey in the southern hemisphere, is analysed. Based on optical follow-up observations, the X-ray sources are spectroscopically classified. Optical and X-ray properties of the different X-ray source populations are studied and their differences are derived. The amount of absorption in the X-ray spectra of type II AGN, which are considered a main contributor to the X-ray background at ~30 keV, is determined. In order to extend the sample size of the rare type II AGN, this study also includes objects from another survey, the XMM-Newton Serendipitous Medium Sample. In addition, the dependence of the absorption in type II AGN on redshift and X-ray luminosity is analysed. We detected 328 X-ray sources in the Marano field, of which 140 were spectroscopically classified. We found 89 type I AGN, 36 type II AGN, 6 galaxies, and 9 stars. AGN, galaxies, and stars are clearly distinguishable by their optical and X-ray properties. Type I and type II AGN do not separate clearly.
They have a significant overlap in all studied properties. In a few cases the X-ray properties contradict the observed optical properties of type I and type II AGN. For example, we find type II AGN that show evidence for optical absorption but are not absorbed in X-rays. Based on the additional use of near-infrared imaging (K-band), we were able to identify several of the rare type II AGN. The X-ray spectra of type II AGN from the XMM-Newton Marano field survey and the XMM-Newton Serendipitous Medium Sample were analysed. Since most of the sources have only ~40 X-ray counts in the XMM-Newton PN detector, I carefully studied the fit results of simulated X-ray spectra as a function of fit statistic and binning method. The objects revealed only moderate absorption. In particular, I do not find any Compton-thick sources (absorbed by column densities of NH > 1.5 x 10^24 cm^-2). This gives evidence that type II AGN are not the main contributor to the X-ray background around 30 keV. Although bias effects may occur, type II AGN show no noticeable trend of the amount of absorption with redshift or X-ray luminosity.
Within this thesis, white light with a spectral width of more than one optical octave was generated for the first time with a ps pump laser (10 ps) in a microstructured fibre (MSF) at a pump wavelength of 1064 nm. Apart from unconverted remnants of the pump radiation, an unstructured and temporally stable white-light spectrum from 700 nm to 1650 nm could be generated. The maximum output power of this white-light radiation was 3.1 W. Very good coupling efficiencies of up to 62 % were achieved. The dispersive and nonlinear optical effects involved in the white-light generation, such as self-phase modulation, four-wave mixing, modulation instabilities and soliton effects, are investigated and explained in detail theoretically. The thesis also contains an extensive description of the operation and properties of microstructured fibres with a solid fibre core. Owing to the large variety of possible microstructured fibre claddings and the associated waveguide properties, a number of interesting properties arise, in particular for applications in nonlinear optics. A total of four different microstructured fibres were investigated experimentally. For the interpretation of the experimental results, the propagation of the ps pump pulses in a dispersive, nonlinear optical fibre was calculated using the generalised nonlinear Schrödinger equation. By comparing the calculations with the measured data, amplified modulation instabilities and various soliton effects were identified as mainly responsible for the white-light generation with ps excitation pulses. Based on these investigations, a compact and powerful white-light source was developed in cooperation with Jenoptik Laser, Optik, Systeme GmbH.
This source was successfully tested in an optical coherence tomography (OCT) measurement: ex vivo investigations showed that this ps white-light source achieves a large penetration depth of about 400 µm into the retina of a monkey.
In this thesis, the variability of the atmosphere was investigated in a new coupled climate model (ECHO-GiSP) that includes a simplified stratospheric chemistry (up to 80 km altitude). Two simulations over 150 years were carried out. In the first simulation, the atmospheric chemistry was modelled but had no influence on the dynamics of the climate model. In the second simulation, the effect of the chemistry on the climate dynamics, acting through the radiative balance of the model, was explicitly taken into account. This is the first long-term simulation with a fully coupled global climate model with interactive chemistry. The simulation with interactive chemistry shows a weakening of the Arctic Oscillation (AO) pattern of atmospheric variability. In addition, the mean mid-latitude wind speeds in the troposphere are reduced, owing to decreased temperature contrasts between the tropics and the polar regions. In the stratosphere, the polar vortex is likewise weakened and warmed. These effects of the coupling between atmospheric chemistry and the dynamics of the climate model are an important finding, since in earlier climate simulations the variability of the AO was often too pronounced. In the stratosphere, as a consequence of the weakened polar vortex, the large-scale circulation between the two hemispheres of the Earth is also reduced. In the troposphere, by contrast, the general circulation, and with it the subtropical jet streams, are strengthened. Furthermore, temperature changes occur in the tropics through stratospheric ozone variations depending on the AO. In general, the coupling between troposphere and stratosphere changes, including the vertical transfer of energy from the troposphere into the stratosphere through the excitation of long atmospheric waves.
In this work, some new results on exploiting the recurrence properties of quasiperiodic dynamical systems are presented by means of a two-dimensional visualization technique, Recurrence Plots (RPs). Quasiperiodicity is the simplest form of dynamics exhibiting nontrivial recurrences, which are common in many nonlinear systems. The concept of recurrence was introduced to study the restricted three-body problem, and it is very useful for the characterization of nonlinear systems. I have analyzed in detail the recurrence patterns of systems with quasiperiodic dynamics, both analytically and numerically. Based on a theoretical analysis, I propose a new procedure to distinguish quasiperiodic dynamics from chaos. This algorithm is particularly useful in the analysis of short time series. Furthermore, this approach proves to be efficient in recognizing regular and chaotic trajectories of dynamical systems with mixed phase space. Regarding applications to real situations, I have shown the capability and validity of this method by analyzing time series from fluid experiments.
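A recurrence plot can be computed in a few lines. The following sketch (illustrative signal and threshold, not the thesis's algorithm) builds the binary recurrence matrix of a quasiperiodic signal, here an irrational rotation sampled through a cosine.

```python
import math

def recurrence_matrix(x, eps):
    """Binary recurrence matrix of a scalar time series:
    R[i][j] = 1 if |x_i - x_j| < eps, else 0."""
    n = len(x)
    return [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

# quasiperiodic test signal: rotation by the (irrational) golden mean
golden = (math.sqrt(5.0) - 1.0) / 2.0
signal = [math.cos(2.0 * math.pi * golden * t) for t in range(200)]

R = recurrence_matrix(signal, 0.1)
rate = sum(map(sum, R)) / float(len(R) ** 2)   # recurrence rate
```

By construction the matrix is symmetric with a fully recurrent main diagonal; for quasiperiodic dynamics the off-diagonal recurrence points organize into the uninterrupted diagonal lines whose statistics the procedure above exploits.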
In this thesis, methods of Earth system analysis are applied to the investigation of the habitability of terrestrial exoplanets. The thermal evolution of terrestrial planets is calculated with a parameterised convection model for the Earth. As the luminosity of the central star increases, the planetary climate is stabilised by the global carbonate-silicate cycle. For a photosynthetically active biosphere, which can exist within a certain temperature range at a sufficient CO2 concentration, a survival time span is estimated. The range of distances around a star within which such a biosphere is productive is defined as the photosynthetically active habitable zone (pHZ) and is calculated. The time at which the pHZ finally vanishes in an extrasolar planetary system sets the maximum life span of the biosphere. For super-Earths, massive terrestrial planets, this life span is longer the more massive the planet and shorter the more it is covered by continents. For super-Earths that are neither pronounced water worlds nor land worlds, the maximum life span scales with the planetary mass with an exponent of 0.14. Around K and M stars, the survival span of a biosphere on a planet is always set by this maximum life span and is not limited by the end of the main-sequence evolution of the central star. The pHZ concept is applied to the extrasolar planetary system Gliese 581; according to it, the super-Earth Gliese 581d, with 8 Earth masses, could be habitable. Based on the pHZ concept presented here, the Rare Earth hypothesis put forward by Ward and Brownlee in 1999 is quantified for the first time for the Milky Way. This hypothesis states that complex life is probably very rare in the universe, whereas primitive life could be widespread.
Different temperature and CO2 tolerances, as well as a different influence on weathering for complex and primitive life forms, lead to different boundaries of the pHZ and to different estimates of the number of planets that could be inhabited by the corresponding life forms. The result is that planets inhabited by complex life should today be about 100 times rarer than planets inhabited by primitive life.
The biological function and the technological applications of semiflexible polymers, such as DNA, actin filaments and carbon nanotubes, strongly depend on their rigidity. Semiflexible polymers are characterized by their persistence length, the definition of which is the subject of the first part of this thesis. Attractive interactions, which arise e.g. in the adsorption, condensation and bundling of filaments, can change the conformation of a semiflexible polymer. The conformation depends on the relative magnitude of the material parameters and can be influenced by them in a systematic manner. In particular, the morphologies of semiflexible polymer rings, such as circular nanotubes or DNA, adsorbed onto substrates with three types of structures are studied: (i) a topographical channel, (ii) a chemically modified stripe and (iii) a periodic pattern of topographical steps. The results are compared with the condensation of rings by attractive interactions. Furthermore, the bundling of two individual actin filaments, whose ends are anchored, is analyzed. This system geometry is shown to provide a systematic and quantitative method to extract the magnitude of the attraction between the filaments from experimentally observable conformations of the filaments.
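One common operational definition of the persistence length, via the exponential decay of the tangent-tangent correlation, can be illustrated with a small simulation (a discrete 2D worm-like-chain sketch with assumed parameters, not the definition developed in the thesis).

```python
import math
import random

def persistence_from_tangents(lp=20.0, b=1.0, n_bonds=200, n_chains=200,
                              lag=20, seed=3):
    """Sample discrete 2D chains whose bond angle performs a Gaussian
    random walk with variance b/lp per bond of length b.  The tangent
    correlation then decays as <cos(theta_s - theta_0)> = exp(-s/(2*lp))
    (2D convention), from which the persistence length is read off."""
    rng = random.Random(seed)
    sigma = math.sqrt(b / lp)        # bond-angle increment std deviation
    acc, cnt = 0.0, 0
    for _ in range(n_chains):
        theta = 0.0
        angles = [theta]
        for _ in range(n_bonds - 1):
            theta += rng.gauss(0.0, sigma)
            angles.append(theta)
        for i in range(n_bonds - lag):
            acc += math.cos(angles[i + lag] - angles[i])
            cnt += 1
    corr = acc / cnt
    return -lag * b / (2.0 * math.log(corr))   # invert exp(-s/(2*lp))
```

Inverting the measured correlation at a single lag recovers the input persistence length up to sampling error; fitting the full exponential decay over many lags would be the more robust variant.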
Magnetorotational instability (MRI) is one of the most important and most common instabilities in astrophysics. Today it is widely accepted that it serves as a major source of turbulent viscosity in accretion disks, the most energy-efficient objects in the universe. The importance of the MRI for astrophysics has been realized only in the last fifteen years. Originally, however, it was discovered much earlier, in 1959, in a very different context: the theoretical flow of a conducting liquid confined between differentially rotating cylinders in the presence of an external magnetic field. The central conclusion is that an additional magnetic field parallel to the axis of rotation can destabilize an otherwise stable flow. The theory of non-magnetized fluid motion between rotating cylinders has a much longer history, though: it was studied as early as 1888, and today such a setup is usually referred to as Taylor-Couette flow. To prove experimentally the existence of the MRI in a magnetized Taylor-Couette flow is a demanding task, and different MHD groups around the world are trying to achieve it. The main problem lies in the fact that the laboratory liquid metals used in such experiments are characterized by a small magnetic Prandtl number. Consequently, the rotation rates of the cylinders must be extremely large, and a vast number of technical problems emerges. One of the most important difficulties is the influence of the plates enclosing the cylinders in any experiment. For fast rotation the plates tend to dominate the whole flow, and the MRI cannot be observed. In this thesis we discuss a special helical configuration of the applied magnetic field which allows the critical rotation rates to be much smaller. If only the axial magnetic field is present, the cylinders must rotate with angular velocities corresponding to Reynolds numbers of order Re ≈ 10^6. With the helical field this number is dramatically reduced to Re ≈ 10^3.
The azimuthal component of the magnetic field can easily be generated by running an electric current along the axis of rotation. In a Taylor-Couette flow the (primary) instability manifests itself as Taylor vortices. The specific geometry of the helical magnetic field leads to a traveling-wave solution, and the vortices drift in a direction determined by the rotation and the magnetic field. In an idealized study for infinitely long cylinders this is not a problem. However, if the cylinders have finite length and are bounded vertically by the plates, the situation is different. In this dissertation it is shown, using numerical methods, that the traveling-wave solution also exists for MHD Taylor-Couette flow at finite aspect ratio H/D, where H is the height of the cylinders and D the width of the gap between them. The nonlinear simulations provide amplitudes of the fluid velocity which are helpful in designing an experiment. Although the plates disturb the flow, parameters like the drift velocity indicate that the helical MRI operates in this case as well. The idea of the helical MRI was implemented in the very recent experiment PROMISE. Its results provided, for the first time, evidence that the (helical) MRI indeed exists. Nevertheless, the influence of the vertical endplates was evident, and the experiment can, in principle, be improved. Exemplary methods for reducing these end effects are proposed here. An Ekman-Hartmann layer develops near the vertical boundaries. A study of this layer for the MHD Taylor-Couette system, as well as of its impact on the global flow properties, is presented. It is shown that the plates, especially if they are conducting, can disturb the flow far more than previously thought, even for relatively slow rotation rates.
This work is concerned with the spatio-temporal structures that emerge when non-identical, diffusively coupled oscillators synchronize. It contains analytical results and their confirmation through extensive computer simulations. We use the Kuramoto model, which reduces general oscillatory systems to phase dynamics. The symmetry of the coupling plays an important role in the formation of patterns. We have studied the ordering influence of an asymmetry (non-isochronicity) in the phase coupling function on the phase profile in synchronization, and the intricate interplay between this asymmetry and the frequency heterogeneity in the system. The thesis is divided into three main parts. Chapters 2 and 3 introduce the basic Kuramoto model and conditions for stable synchronization. In Chapter 4 we characterize the phase profiles in synchronization for various special cases and in an exponential approximation of the phase coupling function, which allows for an analytical treatment. Finally, in the third part (Chapter 5), we study the influence of non-isochronicity on the synchronization frequency in continuous reaction-diffusion systems and discrete networks of oscillators.
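The phase dynamics studied here can be illustrated with a minimal numerical sketch of the classical Kuramoto model (the standard sinusoidal coupling without the non-isochronicity term analyzed in the thesis; all parameter values are illustrative):

```python
import numpy as np

def kuramoto(N=50, K=2.0, steps=2000, dt=0.05, seed=0):
    """Euler integration of the Kuramoto model
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i);
    returns the order parameter r = |<exp(i*theta)>| (r ~ 1: synchrony)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.1, N)          # heterogeneous natural frequencies
    theta = rng.uniform(0.0, 2*np.pi, N)
    for _ in range(steps):
        coupling = (K/N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta += dt * (omega + coupling)
    return abs(np.exp(1j*theta).mean())
```

With a coupling strength well above the spread of natural frequencies the ensemble phase-locks and r approaches 1; for K=0 the phases remain incoherent and r stays small.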
In this dissertation we study problems related to synchronization phenomena in the presence of noise, which unavoidably appears in real systems. One part of the work investigates the use of delayed feedback to control properties of diverse chaotic and stochastic dynamical systems, with emphasis on those determining the predisposition to synchronization. The other part deals with a constructive role of noise, i.e. its ability to synchronize identical self-sustained oscillators. First, we demonstrate that the coherence of a noisy or chaotic self-sustained oscillator can be efficiently controlled by delayed feedback. We develop the analytical theory of this effect, considering noisy systems in the Gaussian approximation. Possible applications of the effect for synchronization control are also discussed. Second, we consider the synchrony of limit-cycle systems (in other words, self-sustained oscillators) driven by identical noise. For weak noise and smooth systems we prove the purely synchronizing effect of noise. For slightly different oscillators and/or slightly nonidentical driving, synchrony becomes imperfect, and this subject is studied as well. Then we show numerically that moderate noise can desynchronize some systems under certain circumstances. For neurons this last effect means “antireliability” (the “reliability” property of neurons is considered important from the viewpoint of information transmission), and we extend our investigation to neural oscillators, which are not always of limit-cycle type. Third, we develop a weakly nonlinear theory of the Kuramoto transition (a transition to collective synchrony) in an ensemble of globally coupled oscillators in the presence of additional time-delayed coupling terms. We show that a linear delayed feedback not only controls the transition point but effectively changes the nonlinear terms near the transition.
A purely nonlinear delayed coupling does not affect the transition point, but can reduce or enhance the amplitude of collective oscillations.
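The purely synchronizing effect of identical noise can be illustrated with a minimal numerical sketch (an illustrative phase model with sinusoidal noise sensitivity and arbitrary parameters, not one of the oscillators analyzed in the thesis):

```python
import numpy as np

def phase_gap_after_common_noise(sigma=0.8, omega=1.0, steps=100_000,
                                 dt=0.001, seed=1):
    """Two IDENTICAL phase oscillators driven by the SAME noise realization:
    dphi = omega*dt + sigma*sin(phi)*dW (Euler-Maruyama). The common noise
    contracts the phase difference toward zero (negative Lyapunov exponent)."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), steps)   # one shared noise sequence
    phi1, phi2 = 0.0, 2.0                      # distinct initial phases
    for w in dW:
        phi1 += omega*dt + sigma*np.sin(phi1)*w
        phi2 += omega*dt + sigma*np.sin(phi2)*w
    # wrapped phase difference; a value near 0 means noise-induced synchrony
    return abs((phi1 - phi2 + np.pi) % (2*np.pi) - np.pi)
```

Starting two radians apart, the oscillators converge onto the same noisy trajectory; driving them with two independent noise sequences instead would leave the gap diffusing.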
The predictability problem
(2007)
We try to determine whether it is possible to approximate the subjective Cloze predictability measure with two types of objective measures, semantic and word n-gram measures, based on the statistical properties of text corpora. The semantic measures are constructed either by querying Internet search engines or by applying Latent Semantic Analysis, while the word n-gram measures solely depend on the results of Internet search engines. We also analyse the role of Cloze predictability in the SWIFT eye movement model, and evaluate whether other parameters might be able to take the place of predictability. Our results suggest that a computational model that generates predictability values not only needs measures that can determine the relatedness of a word to its context; the presence of measures that assert unrelatedness is just as important. Although we only have similarity measures at our disposal, we predict that SWIFT should perform just as well when Cloze predictability is replaced with our measures.
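The word n-gram measures are conditional probabilities estimated from occurrence counts. A minimal sketch using counts from a local toy corpus instead of the search-engine counts used in the thesis (function name and corpus are illustrative):

```python
from collections import Counter

def bigram_predictability(tokens, prev_word, word):
    """Maximum-likelihood estimate of the conditional bigram probability
    P(word | prev_word) from a token list - a local-corpus stand-in for
    the search-engine-count-based n-gram measures described above."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    contexts = Counter(tokens[:-1])   # counts of words in context position
    if contexts[prev_word] == 0:
        return 0.0
    return bigrams[(prev_word, word)] / contexts[prev_word]

# toy corpus (illustrative): "cat" follows "the" in 2 of its 3 contexts
corpus = "the cat sat on the mat the cat ran".split()
```

Unseen bigrams receive probability zero here; a realistic predictability model would smooth these counts and combine them with semantic relatedness measures as discussed above.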
Our dynamic Sun manifests its activity through different phenomena: from the 11-year cyclic sunspot pattern to unpredictable and violent explosions in the case of solar flares. During flares, a huge amount of the stored magnetic energy is suddenly released, and a substantial part of this energy is carried by energetic electrons, considered to be the source of the nonthermal radio and X-ray radiation. One of the most important and still open questions in solar physics is how the electrons are accelerated to high energies within the short time scales observed in the radio emission. Because the acceleration site is also extremely small in spatial extent (compared to the solar radius), the electron acceleration is regarded as a local process. The search for localized wave structures in the solar corona that are able to accelerate electrons, together with the theoretical and numerical description of the conditions and requirements for this process, is the aim of the dissertation. Two models of electron acceleration in the solar corona are proposed in the dissertation: I. Electron acceleration due to the interaction of a solar jet with the background coronal plasma (the jet--plasma interaction). A jet is formed when the newly reconnected and highly curved magnetic field lines relax by shooting plasma away from the reconnection site. Such jets, as observed in soft X-rays with the Yohkoh satellite, are spatially and temporally associated with beams of nonthermal electrons (in terms of the so-called type III metric radio bursts) propagating through the corona. A model that attempts to explain these observational facts is developed here. Initially, the interaction of such jets with the background plasma leads to an (ion-acoustic) instability, associated with electrostatic fluctuations growing in time for a certain range of initial jet velocities.
During this process, any test electron that happens to feel this electrostatic wave field is drawn to co-move with the wave, gaining energy from it. When the jet speed is greater or lower than that required by the instability range, such wave excitation cannot be sustained and the process of electron energization (acceleration and/or heating) ceases. Hence, the electrons can propagate further into the corona and be detected as a type III radio burst, for example. II. Electron acceleration due to attached whistler waves in the upstream region of coronal shocks (the electron--whistler--shock interaction). Coronal shocks are also able to accelerate electrons, as observed through the so-called type II metric radio bursts (the radio signature of a shock wave in the corona). From in-situ observations in space, e.g., at shocks related to co-rotating interaction regions, it is known that nonthermal electrons are produced preferably at shocks with attached whistler wave packets in their upstream regions. Motivated by these observations, and assuming that the physical processes at shocks are the same in the corona as in the interplanetary medium, a new model of electron acceleration at coronal shocks is presented in the dissertation, where the electrons are accelerated by their interaction with such whistlers. The protons inflowing toward the shock are reflected there while nearly conserving their magnetic moment, so that they gain a substantial velocity in the case of a quasi-perpendicular shock geometry, i.e., when the angle between the shock normal and the upstream magnetic field is in the range 50--80 degrees. The so-accelerated protons are able to excite whistler waves in a certain frequency range in the upstream region. When these whistlers (comprising the localized wave structure in this case) are formed, only the incoming electrons are able to interact resonantly with them.
But only a part of these electrons fulfill the electron--whistler wave resonance condition. Due to this resonant interaction (i.e., of these electrons with the whistlers), the electrons are accelerated in the electric and magnetic wave field within just several whistler periods. While gaining energy from the whistler wave field, the electrons reach the shock front and, subsequently, a major part of them are reflected back into the upstream region, since the shock, accompanied by a jump of the magnetic field, acts as a magnetic mirror. Co-moving with the whistlers now, the reflected electrons are out of resonance and hence can propagate undisturbed into the far upstream region, where they are detected in terms of type II metric radio bursts. In summary, in both cases, i.e., at jets outflowing from the magnetic reconnection site and at shock waves in the corona, the kinetic energy of protons is transferred to electrons by the action of localized wave structures.
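The magnetic-mirror reflection invoked above for both protons and electrons follows from the adiabatic invariance of the magnetic moment (standard relations, stated here only for context):

```latex
\mu=\frac{m v_\perp^2}{2B}\approx\mathrm{const},
\qquad
\sin^2\alpha_0 \;\ge\; \frac{B_{\mathrm{up}}}{B_{\mathrm{down}}}
\;\Rightarrow\;\text{reflection},
```

where $\alpha_0$ is the particle's upstream pitch angle and $B_{\mathrm{up}}$, $B_{\mathrm{down}}$ are the magnetic field strengths upstream and downstream of the shock: particles with sufficiently large pitch angle cannot pass the field jump and are reflected back upstream.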
The solar tachocline is a thin transition layer between the solar radiative zone, which rotates uniformly, and the solar convection zone, which has a mainly latitudinal differential rotation profile. This layer has a thickness of less than $0.05R_{\sun}$ and is subject to extreme radial as well as latitudinal shears. Helioseismological estimates place this layer at roughly $0.7R_{\sun}$. The tachocline mostly resides in the sub-adiabatic, non-turbulent radiative interior, except for a small overlap with the convection zone at the top. Many proposed dynamo mechanisms involve strong toroidal magnetic fields in this transition region. The exact mechanism behind the formation of such a thin layer is still disputed. A very plausible mechanism is one involving a weak, relic poloidal magnetic field trapped inside the radiative zone, which is responsible for expelling differential rotation outwards. This was first proposed by \citet{RK97}. The present work develops this idea with numerical simulations including additional effects like meridional circulation. It is shown that a relic field of 1~Gauss or smaller would be sufficient to explain the observed thickness of the tachocline. The stability of the solar tachocline is addressed as the next part of the problem. It is shown that the tachocline is stable up to a differential rotation of 52\% in the absence of magnetic fields. This is a new finding compared to the earlier two-dimensional models, which estimated the solar differential rotation (about 28\%) to be marginally stable or even unstable. The changed stability limit is attributed to the changed stability criterion of the 3-dimensional model, which also involves radial gradients of the angular velocity. In the presence of toroidal magnetic field belts, the lowest non-axisymmetric mode is shown to be the most unstable one for the radiative part of the tachocline. It is estimated that the tachocline would become unstable for toroidal fields exceeding about 100~Gauss.
With both formation and stability questions satisfactorily addressed, this work presents the most comprehensive analysis of the physical processes in the solar tachocline to date.
Our Solar system contains a large amount of dust, carrying valuable information about our close cosmic environment. If created in a planet's system, the particles stay predominantly in its vicinity and can form extended dust envelopes, tori or rings around it. A fascinating example of such complexes are the Saturnian rings, containing a wide range of particle sizes, from house-sized objects in the main rings down to micron-sized grains constituting the E ring. Other examples are ring systems in general, which contain a large fraction of dust, or the putative dust tori surrounding the planet Mars. The dynamical ``life'' of such circumplanetary dust populations is the main subject of our study. In this thesis a general model of the creation, dynamics and ``death'' of circumplanetary dust is developed. Endogenic and exogenic processes creating dust at atmosphereless bodies are presented. Then we describe the main forces influencing the particle dynamics and study the dynamical responses induced by stochastic fluctuations. In order to estimate the properties of the steady-state population of the considered dust complex, the mean grain lifetime, resulting from a balance of dust creation, ``life'' and loss mechanisms, is determined. The latter depends strongly on the surrounding environment, the particle properties and the particle's dynamical history. The presented model can readily be applied to the study of any circumplanetary dust complex. As an example we study the dynamics of two dust populations in the Solar system. First we explore the dynamics of particles ejected from the Martian moon Deimos by impacts of micrometeoroids, which should form a putative torus along the orbit of the moon. The long-term influence of the indirect component of radiation pressure, the Poynting-Robertson drag, gives rise to a significant change of the torus geometry.
Furthermore, the action of radiation pressure on rotating non-spherical dust particles results in a stochastic dispersion of the initially confined ensemble of particles, which causes a decrease of the particle number density and of the corresponding optical depth of the torus. Second, we investigate the dust dynamics in the vicinity of the Saturnian moon Enceladus. During three flybys of Enceladus by the Cassini spacecraft, the on-board dust detector registered a micron-sized dust population around the moon. Surprisingly, the peak of the measured impact rate occurred 1 minute before the closest approach of the spacecraft to the moon. This asymmetry of the measured rate can be associated with locally enhanced dust production near Enceladus' south pole. Other Cassini instruments also detected evidence of geophysical activity in the south polar region of the moon: a high surface temperature and extended plumes of gas and dust leaving the surface. Comparison of our results with these in situ measurements reveals that the south polar ejecta may provide the dominant source of particles sustaining Saturn's E ring.
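For context, the strength of radiation pressure on a spherical grain, whose velocity-dependent (indirect) part is the Poynting-Robertson drag discussed above, is conventionally measured by the ratio (a standard expression from the dust-dynamics literature, not derived in this abstract):

```latex
\beta=\frac{F_{\mathrm{rad}}}{F_{\mathrm{grav}}}
 =\frac{3\,L_{\odot}\,Q_{\mathrm{pr}}}{16\pi\,G M_{\odot}\,c\,\rho\,s},
```

where $Q_{\mathrm{pr}}$ is the radiation-pressure efficiency, $\rho$ the grain bulk density and $s$ the grain radius; the $1/s$ scaling is why micron-sized grains, unlike the larger ring particles, respond strongly to radiation forces.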
In nature one commonly finds interacting complex oscillators which, through their coupling scheme, form small and large networks, e.g. neural networks. Surprisingly, the oscillators can synchronize while still preserving their complex behavior. Synchronization is a fundamental phenomenon in coupled nonlinear oscillators. Synchronization can appear at different levels, that is, under different constraints. These constraints can act on the trajectory amplitude, requiring the amplitudes of both oscillators to be equal, giving rise to complete synchronization. Conversely, the constraint can also act on a function of the trajectory, e.g. the phase, giving rise to phase synchronization (PS). In this case, one requires the phase difference between both oscillators to remain finite for all times, while the trajectory amplitudes may be uncorrelated. The study of PS has shown its relevance to important technological problems, e.g. communication, collective behavior in neural networks, pattern formation, Parkinson's disease, epilepsy, as well as behavioral activities. It has been reported that PS mediates processes of information transmission and collective behavior in neural and active networks, and communication processes in the human brain. In this work, we have pursued a general way to analyze the onset of PS in small and large networks. Firstly, we have analyzed many phase coordinates for compact attractors. We have shown that for a broad class of attractors the PS phenomenon is invariant under the phase definition. Our method makes it possible to establish the existence of phase synchronization in coupled chaotic oscillators without having to measure the phase. This is done by observing the oscillators at special times and analyzing whether the resulting set of points is localized.
We have shown that this approach is fruitful for analyzing the onset of phase synchronization in chaotic attractors whose phases are not well defined, as well as in networks of non-identical spiking/bursting neurons connected by chemical synapses. Moreover, we have related synchronization and information transmission through these conditional observations. In particular, we have found that clusters may appear inside a network. These can be used to transmit more than one piece of information, which provides multi-processing of information. Furthermore, these clusters provide a multichannel communication, that is, one can integrate a large number of neurons into a single communication system, and information can arrive simultaneously at different places in the network.
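The idea of observing one oscillator at special times of another and checking whether the resulting set is localized can be sketched with two coupled phase oscillators (an illustrative stand-in for the chaotic and neural oscillators treated in the thesis; all names and parameters are made up):

```python
import numpy as np

def conditional_localization(K, w1=1.0, w2=1.2, steps=200_000, dt=0.001):
    """Record the phase of oscillator 1 each time oscillator 2 completes
    a cycle; return |<exp(i*phase)>| of the recorded set. A value near 1
    means the conditional observations form a localized set (phase
    synchronization); a value near 0 means they spread over the circle."""
    p1, p2 = 0.3, 0.0
    obs = []
    for _ in range(steps):
        dp1 = w1 + K*np.sin(p2 - p1)       # mutual diffusive phase coupling
        dp2 = w2 + K*np.sin(p1 - p2)
        p1 += dp1*dt
        p2 += dp2*dt
        if p2 >= 2*np.pi:                  # event: oscillator 2 finished a cycle
            p2 -= 2*np.pi
            obs.append(p1 % (2*np.pi))
    return abs(np.exp(1j*np.array(obs)).mean())
```

No phase of oscillator 1 is ever compared with a phase of oscillator 2; localization of the conditional observations alone diagnoses the synchrony, which is the point of the method described above.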
The aim of this work is the phenomenological investigation of the humidity sensitivity of the electrical properties of thin polymer films. These investigations also constitute preparatory work for the development of prototypes of two polymeric thin-film humidity sensors, each of which, through the specific choice of the humidity-sensitive material, is distinguished from commercial mass products by a particular property. The goal of the development work for the first prototype was the construction of a fast humidity sensor that can detect sudden, step-like humidity changes in the surrounding atmosphere as quickly as possible. For this purpose, thin films of poly-DADMAC were deposited on interdigital structures, which ensure the most direct possible contact between the humidity-sensitive layer and the surrounding humid atmosphere. The AC resistance and capacitance of the films served as measured quantities. The humidity characteristics of the films show good stability and high reproducibility. Under the influence of humidity, the resistance of the films changes by 3 to 5 orders of magnitude, depending on the film thickness, and is suitable as a measure of humidity over the entire humidity range. The hysteresis of the films was determined to be smaller than 2.5% RH, and the reproducibility to be better than 1% RH. The response time of the films is 1 to 10 seconds, depending on the film thickness; in particular, the thin films show short response times. The objective for the second humidity sensor was the development of a prototype whose sensitive layer is biostatic and biocidal, so that it can be used in biotic environments. Five polysulfobetaines were synthesized, whose biocidal and biostatic behavior was determined with the contact test according to Rönnpagel, the ISO846 test and degradation tests.
Two polymers, poly-DMMAAPS (BT2) and poly-[MSA-styrene-sulfobetaine] (BT5), proved to be sufficiently biocidal and biostatic. Films of these polymers were deposited on interdigital structures, and the characteristic curves of these samples were then recorded. The measurements show good stability and high reproducibility for both polymers. BT2 samples are particularly sensitive between 20% and 80% RH and show no long-term drift over one month. Cross-linked samples show no temperature-induced decrease of the humidity sensitivity up to 50°C. The use of cross-linked BT5 films as a capacitive humidity sensor is possible up to about 70°C; the films remain stable even after storage in high vacuum and repeated dew formation. Thus, two functional humidity sensor prototypes are available whose characteristics largely match those of comparable commercial humidity sensors, while distinguishing themselves by a very short response time and a sufficient lifetime under biotic conditions, respectively.
Box simulations of rotating magnetoconvection in the Earth's liquid core. Numerical simulations of the 3D MHD equations were carried out with the code NIRVANA. The equations for compressible rotating magnetoconvection were solved numerically in a Cartesian box under Earth-like conditions. Characteristic properties of mean quantities, such as the turbulence intensity or the turbulent heat flux, which arise through the combined action of small-scale fluctuations, were determined. The correlation length of the turbulence depends significantly on the strength and orientation of the magnetic field, and the anisotropic behavior of the turbulence due to the Coriolis and Lorentz forces is much more pronounced for faster rotation. The development of isotropic behavior on small scales under the influence of rotation alone is already prevented by a weak magnetic field. This results in a turbulent flow dominated by the vertical component. In the presence of a horizontal magnetic field, the vertical turbulent heat flux increases slightly with increasing field strength, so that the cooling of a rotating system is improved. The horizontal heat transport is always directed westward and toward the poles. The latter may constitute the source of a large-scale meridional flow, while the former is relevant for global simulations with non-axisymmetric boundary conditions for the heat flux. The mean electromotive force, which describes the generation of magnetic flux by the turbulence, was computed directly from the solutions for velocity and magnetic field. From this, the corresponding α-coefficients could be derived. Owing to the very weak density stratification, the α-effect changes sign almost exactly in the middle of the box.
The α-effect is positive in the upper half and negative in the lower half of a box rotating on the northern hemisphere. For a strong magnetic field, a pronounced downward advection of magnetic flux is also found. A mean-field model of the geodynamo was constructed, based on the α-effect as computed from the box simulations. For a very restricted class of radial α-profiles, the linear α²-model exhibits oscillations on a time scale determined by the turbulent diffusion time. The essential properties of the periodic solutions are presented, and the influence of the size of the inner core on the characteristics of the critical regime, within which oscillating solutions occur, was investigated. Reversals are interpreted as half an oscillation. They are a rather rare event, since they can only take place when the α-profile remains sufficiently long within the regime permitting periodic solutions. Owing to strong fluctuations on the convective time scale, the probability of such a reversal is relatively small. In a simple non-linear mean-field model with realistic input parameters based on the box simulations, the plausibility of the reversal model was demonstrated by long-term simulations.
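The α²-dynamo referred to above is governed by the mean-field induction equation; for constant turbulent diffusivity $\eta_T$ it reads (standard mean-field electrodynamics; the thesis-specific ingredient is the α-profile taken from the box simulations):

```latex
\frac{\partial\overline{\mathbf{B}}}{\partial t}
 =\nabla\times\left(\alpha\,\overline{\mathbf{B}}\right)
 +\eta_T\,\nabla^{2}\overline{\mathbf{B}},
```

where $\overline{\mathbf{B}}$ is the mean magnetic field. In an α²-model both the poloidal and the toroidal field are regenerated by the α-term alone, without differential rotation, which is why the sign structure of α across the box controls the existence of oscillatory (and hence reversing) solutions.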
It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortages, or the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or when it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random, and to describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function. Detection of the latter is important with respect to, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem which differs somewhat from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable, potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e., comparing the most suitable long-range dependent and the most suitable short-range dependent model.
Approaching the task this way requires a) a suitable class of long-range and short-range dependent models, along with suitable means for parameter estimation, and b) a reliable model selection strategy, capable of discriminating also between non-nested models. With the flexible FARIMA model class together with the Whittle estimator, the first requirement is fulfilled. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic. Its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of p-value and power estimates obtained from the simulated distributions. The result turned out to depend on the model parameters. However, in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly leads to a delay of measures that need to be taken in order to counteract the trend. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis.
A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications. The increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on, and development of, methods for the detection of long-range dependence.
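The Whittle-estimation building block used above can be sketched as a grid-search fit of ARFIMA(0,d,0) and AR(1) spectral densities to the periodogram (a minimal, illustrative implementation; the simulation-based calibration of the likelihood-ratio comparison described above is not included):

```python
import numpy as np

def periodogram(x):
    """Periodogram ordinates at the Fourier frequencies, with the zero
    and Nyquist frequencies excluded, as used in the Whittle approximation."""
    n = len(x)
    lam = 2*np.pi*np.fft.rfftfreq(n)[1:-1]
    I = np.abs(np.fft.rfft(x - x.mean())[1:-1])**2 / (2*np.pi*n)
    return lam, I

def spec_farima0d0(lam, d):
    """Spectral density of ARFIMA(0,d,0), unit innovation variance."""
    return (2*np.sin(lam/2.0))**(-2*d) / (2*np.pi)

def spec_ar1(lam, phi):
    """Spectral density of AR(1), unit innovation variance."""
    return 1.0 / (2*np.pi*np.abs(1.0 - phi*np.exp(-1j*lam))**2)

def whittle_fit(lam, I, spec, grid):
    """Maximize the Whittle log-likelihood -sum(log f + I/f) over a
    parameter grid, profiling out the innovation variance analytically."""
    best_ll, best_p = -np.inf, None
    for p in grid:
        g = spec(lam, p)
        s2 = np.mean(I/g)                    # profiled innovation variance
        ll = -np.sum(np.log(s2*g) + I/(s2*g))
        if ll > best_ll:
            best_ll, best_p = ll, p
    return best_p, best_ll

# demo: the short-memory parameter of a simulated AR(1) is recovered
rng = np.random.default_rng(42)
n, phi_true = 4096, 0.6
eps = rng.normal(size=n + 200)
x = np.zeros(n + 200)
for t in range(1, n + 200):
    x[t] = phi_true*x[t-1] + eps[t]
x = x[200:]                                   # drop burn-in
lam, I = periodogram(x)
phi_hat, ll_ar = whittle_fit(lam, I, spec_ar1, np.linspace(0.0, 0.95, 96))
d_hat, ll_frac = whittle_fit(lam, I, spec_farima0d0, np.linspace(0.0, 0.49, 50))
```

Comparing `ll_ar` and `ll_frac` is the naive model selection step; the thesis's contribution lies in calibrating exactly this comparison with a simulated distribution of the likelihood ratio.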
Nowadays, colloidal rods can be synthesized in large amounts. The rods are typically cylindrical, and their lengths range from several nanometers to a few micrometers. In solution, systems of colloidal rodlike molecules or aggregates can form liquid-crystalline phases with long-range orientational and spatial order. In the present work, we investigate structure formation and fractionation in systems of rodlike colloids with the help of Monte Carlo simulations in the NPT ensemble. Repulsive interactions can successfully be mimicked by the hard rod model, which has been studied extensively in the past. In many cases, however, attractive interactions like van der Waals or depletion forces cannot be neglected. In the first part of this work, the phase behavior of monodisperse attractive rods is characterized for different interaction strengths. Phase diagrams as a function of rod length and pressure are presented. Most systems of synthesized mesoscopic rods have a polydisperse length distribution as a consequence of the longitudinal growth process of the rods. For many technical and research applications, a rather small polydispersity is desired in order to have well-defined material properties. The polydispersity can be reduced by a spatial demixing (fractionation) of long and short rods. Fractionation and structure formation are studied in a tridisperse and a polydisperse bulk suspension of rods. We observe that the resulting structures depend distinctly on the interaction strength. The fractionation in the system is strongly enhanced with increasing interaction strength. Suspensions are typically confined in a container. We therefore also examine the influence of adjacent substrates in systems of tridisperse and polydisperse rod suspensions. Three different substrate types are studied in detail: a planar wall, a corrugated substrate, and a substrate with rectangular cavities. We analyze the fluid structure close to the substrate and substrate-controlled fractionation.
The spatial arrangement of long and short rods in front of the substrate depends sensitively on the substrate structure and the pressure. Rods with a predefined length are segregated at substrates with rectangular cavities.
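The NPT Monte Carlo machinery used above can be illustrated in one dimension, where hard rods (the Tonks gas) admit an exact check: a minimal sketch with illustrative parameters, far simpler than the 3D rod systems simulated in the thesis:

```python
import numpy as np

def npt_tonks(N=20, sigma=1.0, betaP=1.0, sweeps=20000, seed=3):
    """NPT Monte Carlo for 1D hard rods (Tonks gas) between hard walls.
    Volume moves rescale all positions and are accepted with probability
    min(1, exp(-betaP*dL + N*ln(L'/L))); displacement moves keep the box.
    Exact reference for this ensemble: <L> = N*sigma + (N+1)/betaP."""
    rng = np.random.default_rng(seed)
    L = 3.0*N*sigma
    x = (np.arange(N) + 0.5)*(L/N)          # evenly spaced start, no overlaps
    samples = []
    for sweep in range(sweeps):
        for _ in range(N):                   # single-particle displacement moves
            i = rng.integers(N)
            trial = x[i] + rng.uniform(-0.5, 0.5)
            lo = x[i-1] + sigma if i > 0 else 0.5*sigma
            hi = x[i+1] - sigma if i < N-1 else L - 0.5*sigma
            if lo <= trial <= hi:
                x[i] = trial
        Lnew = L + rng.uniform(-2.0, 2.0)    # volume (box-length) move
        if Lnew > N*sigma:
            xnew = x*(Lnew/L)
            ok = (xnew[0] >= 0.5*sigma and xnew[-1] <= Lnew - 0.5*sigma
                  and np.all(np.diff(xnew) >= sigma))
            acc = np.exp(min(0.0, -betaP*(Lnew - L) + N*np.log(Lnew/L)))
            if ok and rng.random() < acc:
                x, L = xnew, Lnew
        if sweep >= sweeps//2:               # discard first half as equilibration
            samples.append(L)
    return float(np.mean(samples))
```

For N=20, sigma=1 and betaP=1 the exact mean box length is 41; the same acceptance rule, with volume in place of length and an overlap test between rods, underlies NPT simulations in three dimensions.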
In this thesis the interplay between hydrodynamic transport and specific adhesion is theoretically investigated. An important biological motivation for this work is the rolling adhesion of white blood cells, experimentally investigated in flow chambers. There, specific adhesion is mediated by weak bonds between complementary molecular building blocks which are either located on the cell surface (receptors) or attached to the bottom plate of the flow chamber (ligands). The model system under consideration is a hard sphere covered with receptors moving above a planar ligand-bearing wall. The motion of the sphere is influenced by a simple shear flow, deterministic forces, and Brownian motion. An algorithm is given that allows this motion, as well as the formation and rupture of bonds between receptors and ligands, to be simulated numerically. The presented algorithm spatially resolves receptors and ligands. This opens up the perspective of applying the results also to flow chamber experiments done with patterned substrates based on modern nanotechnological developments. In the first part, the influence of the flow rate, as well as of the number and geometry of receptors and ligands, on the probability of initial binding is studied. This is done by determining the mean time that elapses until the first encounter between a receptor and a ligand occurs. It turns out that besides the number of receptors, especially the height by which the receptors are elevated above the surface of the sphere plays an important role. These findings are in good agreement with observations of actual biological systems like white blood cells or malaria-infected red blood cells. Then it is studied how bonds that have formed between receptors and ligands, but easily rupture in response to force, influence the motion of the sphere. It is demonstrated that different states of motion, for example rolling, can be distinguished.
The appearance of these states as a function of the important model parameters is then systematically investigated. Furthermore, it is shown which bond property increases the ability of cells to roll stably over a large range of applied flow rates. Finally, the model is applied to another biological process, the transport of spherical cargo particles by molecular motors. In analogy to the systems described so far, molecular motors can be regarded as bonds that are able to move actively. In this part of the thesis the mean distance over which the cargo particles are transported is determined.
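The force-dependent formation and rupture of receptor-ligand bonds can be sketched with a simple stochastic simulation based on the Bell model (off-rate growing exponentially with the force per bond). This is only a minimal illustration of the kind of dynamics described above; all rates, the receptor number, and the assumption of equal force sharing are illustrative and not parameters from the thesis.

```python
import math
import random

def bell_rate(k0, force, f_detach):
    """Bell model: rupture rate grows exponentially with the applied force."""
    return k0 * math.exp(force / f_detach)

def simulate_bonds(steps=10000, dt=1e-4, k_on=50.0, k0_off=1.0,
                   shear_force=2.0, f_detach=1.0, seed=0):
    """Track the number of closed receptor-ligand bonds over time.
    The shear force is assumed to be shared equally among closed bonds."""
    rng = random.Random(seed)
    n_bonds = 0
    n_free = 20  # free receptors in the contact zone (hypothetical number)
    history = []
    for _ in range(steps):
        # formation: any free receptor may bind a ligand underneath
        if n_free > 0 and rng.random() < k_on * n_free * dt:
            n_bonds += 1
            n_free -= 1
        # rupture: the force per bond raises the off-rate (Bell model)
        if n_bonds > 0:
            f_per_bond = shear_force / n_bonds
            if rng.random() < bell_rate(k0_off, f_per_bond, f_detach) * n_bonds * dt:
                n_bonds -= 1
                n_free += 1
        history.append(n_bonds)
    return history

hist = simulate_bonds()
print(max(hist), sum(hist) / len(hist))
```

Long stretches with at least one closed bond correspond to adhesive (rolling-like) states, while stretches with zero bonds correspond to free motion in the shear flow.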
I perform and analyse the first ever calculations of rotating stellar iron core collapse in {3+1} general relativity that start out with presupernova models from stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse and neutrino pressure effects. Based on the results of these calculations, I obtain the to-date most realistic estimates for the gravitational wave signal from collapse, bounce and the early postbounce phase of core collapse supernovae. I supplement my {3+1} GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) on the gravitational wave signature of accretion-induced collapse of white dwarfs to neutron stars.
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, a detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data like tree rings or sediment and ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable approaches of statistical modelling as well as methods of time series analysis are necessary which are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow information about climatic changes to be derived. For an example of trace element abundances and grain-size distributions obtained near Cape Roberts (Eastern Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transportation and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship with the mutual component overlap as well as with the information missing due to grouping and truncation of the measured data is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals that certain problems accompany the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
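The idea of fitting a finite mixture model to a bimodal grain-size distribution can be sketched with a minimal two-component Gaussian EM algorithm. This is a sketch of the basic technique only; it omits the thesis's treatment of grouped and truncated data and of asymptotic uncertainty distributions, and the synthetic "fine" and "coarse" fractions are assumed values for illustration.

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    # crude initialisation from the data quantiles
    mu = np.percentile(x, [25, 75]).astype(float)
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = np.stack([w[k] / (sig[k] * np.sqrt(2 * np.pi))
                        * np.exp(-0.5 * ((x - mu[k]) / sig[k]) ** 2)
                        for k in range(2)])
        r = pdf / pdf.sum(axis=0)
        # M-step: update weights, means and standard deviations
        w = r.sum(axis=1) / len(x)
        mu = (r * x).sum(axis=1) / r.sum(axis=1)
        sig = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / r.sum(axis=1))
    return w, mu, sig

rng = np.random.default_rng(1)
# synthetic bimodal "grain sizes": a fine and a coarse fraction
x = np.concatenate([rng.normal(2.0, 0.3, 400), rng.normal(6.0, 0.5, 600)])
w, mu, sig = em_two_gaussians(x)
print(np.sort(mu))
```

The fitted weights `w` play the role of the relative abundances of the transport/deposition fractions that the climatological interpretation is based on.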
Variations of the stratospheric residual circulation and their influence on the ozone distribution
(2006)
The residual circulation corresponds to the mean mass circulation and describes the meridional transport processes taking place in the zonal mean. Together with anthropogenic ozone depletion, the variations of the residual circulation determine the year-to-year fluctuations of the total ozone column in the Arctic spring. In this thesis, the speed of the Arctic branch of the residual circulation is derived from atmospheric data. For this purpose, the diabatic descent in the polar vortex is determined with the help of trajectory calculations. The vertical motion of the air parcels can be driven either by vertical wind fields or, following a new approach, by diabatic heating rates. The input data are taken from the 45-year reanalysis data set of the European Centre for Medium-Range Weather Forecasts (ECMWF); in addition, the operational ECMWF analysis can be used for the years from 1984 onward. The quality and robustness of the heating-rate and trajectory calculations are supported by sensitivity studies and comparisons with other models. Subsequently, extensive trajectory ensembles are evaluated statistically in order to obtain a detailed, time- and altitude-resolved picture of the diabatic descent. In this context, two methods are developed to determine the descent either averaged over the polar vortex or as a function of equivalent latitude. It is shown that it is necessary to follow the Lagrangian approach based on trajectory calculations, since simple Eulerian means deviate from the Lagrangian vertical velocities. The vortex-averaged descent is compared for individual winters with the observed descent of long-lived trace gases and with other model studies. The comparison shows that the descent based on the vertical wind fields of the ECMWF data sets strongly overestimates the net air-mass transport by the residual circulation.
The new approach based on the heating rates, by contrast, yields realistic results and is therefore used for all calculations. For the first time, a climatology of the diabatic descent is compiled for a period spanning almost five decades. The climatology comprises the vertically and temporally resolved diabatic descent averaged over the entire polar vortex as well as information about the spatial structure of the descent. The natural year-to-year variability of the diabatic descent is very pronounced. It is shown that the ECMWF time series of the diabatic descent correlates strongly with the time series derived from an independently analysed temperature data set. For the first time, the influence of transport processes on the total ozone column in the Arctic spring is quantified directly. It is shown that the year-to-year variability of the total ozone column in the Arctic spring is influenced in equal parts by the variability of the dynamical component and by the variability of the chemical component. The variabilities found in the diabatic descent and in the ozone influx into high latitudes are related to the vertical propagation of planetary waves from the troposphere into the stratosphere.
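The heating-rate-driven trajectory approach rests on integrating d(theta)/dt = Q along each trajectory, so that a negative diabatic heating rate Q (radiative cooling) moves the parcel downward across isentropes. A minimal sketch of this vertical step is given below; the cooling profile `q` is a toy assumption, not an ECMWF heating-rate field.

```python
import numpy as np

def descend(theta0, heating_rate, days, dt=0.25):
    """Integrate d(theta)/dt = Q along a trajectory: with a negative
    diabatic heating rate Q (in K/day) the parcel sinks across
    isentropic surfaces, as in the winter polar vortex.
    heating_rate is a (hypothetical) function of theta returning K/day."""
    theta = theta0
    t = 0.0
    path = [theta]
    while t < days:
        theta += heating_rate(theta) * dt  # simple Euler step
        t += dt
        path.append(theta)
    return np.array(path)

# illustrative cooling profile: stronger radiative cooling at higher levels
q = lambda theta: -0.02 * (theta - 350.0)   # K/day, toy profile
path = descend(theta0=550.0, heating_rate=q, days=90)
print(round(path[-1], 1))
```

Averaging many such trajectories over the vortex (or binning them by equivalent latitude) yields the kind of time- and altitude-resolved descent climatology described above.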
The primary objective of this work was to develop a laser source for fundamental investigations in the field of laser-materials interactions. In particular, it is intended to facilitate the study of how the temporal energy distribution, such as the interaction between adjacent pulses, influences ablation processes. The aim was therefore to design a laser with a highly flexible and easily controllable temporal energy distribution. The laser meeting these demands is an SBS laser with optional active mode-locking. The nonlinear reflectivity of the SBS mirror leads to passive Q-switching and emits bursts of ns pulses with µs spacing. The pulse train parameters such as pulse duration, pulse spacing, pulse energy and number of pulses within a burst can be individually adjusted by tuning the pump parameters and the starting conditions for the laser. Another feature of the SBS reflection is phase conjugation, which leads to an excellent beam quality thanks to the compensation of phase distortions. Transverse fundamental mode operation and a beam quality better than 1.4 times the diffraction limit can be maintained for average output powers of up to 10 W. In addition to the dynamics on the ns timescale described above, a defined splitting of each ns pulse into a train of ps pulses can be achieved by additional active mode-locking. This twofold temporal focussing of the intensity leads to single pulse energies of up to 2 mJ at pulse durations of approximately 400 ps, which corresponds to a pulse peak power of 5 MW. While the pulse duration is of the same order of magnitude as that of other passively Q-switched lasers with simultaneous mode-locking, the pulse energy and pulse peak power exceed the values of such systems found in the literature by an order of magnitude. To the best of my knowledge the laser presented here is the first implementation of a self-starting mode-locked SBS laser oscillator.
In order to gain a better understanding and control of the transient output of the laser, two complementary numerical models were developed. The first is based on laser rate equations which are solved for each laser mode individually, while the mode-locking dynamics are calculated from the resulting transient spectrum. The rate equations consider the mean photon densities in the resonator; the propagation of the light inside the resonator is therefore not properly represented. The second model, in contrast, introduces a spatial resolution of the resonator, so that the propagation inside the resonator can be treated more accurately. Consequently, a mismatch between the loss modulation frequency and the resonator round-trip time can be accounted for. This model calculates all dynamics in the time domain, and spectral influences such as the Stokes shift therefore have to be neglected. Both models achieve an excellent reproduction of the ns dynamics generated by the SBS Q-switch. Separately, each model fails to reproduce all aspects of the ps dynamics of the SBS laser in detail. This can be attributed to the complexity of the numerous physical processes involved in this system. Thanks to their complementary nature, however, the models provide a very useful tool for investigating the various influences on the dynamics of the mode-locked SBS laser individually. These aspects can eventually be recombined to give a complete picture of the mechanisms which govern the output dynamics. The aspects under scrutiny included, in particular, the quality of the starting resonator, which determines the starting condition for the SBS Q-switch, the modulation depth of the AOM, and the phonon lifetime as well as the Brillouin frequency of the SBS medium.
The numerical simulations and the experiments have opened several doors inviting further investigation and promising potential for further improvement of the experimental results. The results of the simulations, in combination with the experimental results which determined the starting conditions for the simulations, leave no doubt that the bandwidth generation can primarily be attributed to the SBS Stokes shift during the buildup of the Q-switch pulse. In each resonator round trip, bandwidth is generated by shifting part of the circulating light in frequency. The magnitude of the frequency shift corresponds to the Brillouin frequency, which is a material constant of the SBS medium and amounts to 240 MHz in the case of SF6. The modulation of the AOM merely provides an exchange of population between spectrally adjacent modes and therefore diminishes the modulation in the spectrum. By using a material with a Brillouin frequency in the GHz range, the bandwidth generation can be considerably accelerated, thereby shortening the pulse duration. It was also demonstrated that yet another nonlinear effect of the SBS can be exploited: if the phonon lifetime is short compared to the resonator round-trip time, a modulation in the SBS reflectivity is obtained that supports the modulation of the AOM. The application of external optical feedback via a conventional mirror turns out to be an alternative to the AOM for synchronizing the longitudinal resonator modes. The interesting feature of this system is that, although highly complex in its physical processes and temporal output dynamics, it is very simple and inexpensive from a technical point of view: no expensive modulators and no control electronics are necessary. Finally, the numerical models constitute a powerful tool for investigating the emission dynamics of complex laser systems on arbitrary timescales and can also display the spectral evolution of the laser output.
In particular, it could be demonstrated that the differences between the results of the complementary models vanish for systems of lesser complexity.
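The rate-equation backbone of the first model can be sketched in normalised form for a single mode: the photon density grows at the net round-trip gain and depletes the inversion, producing the giant Q-switch pulse. This is a generic textbook sketch under assumed dimensionless parameters; the thesis's models additionally resolve individual modes and the nonlinear SBS mirror, which are omitted here.

```python
def q_switch_pulse(n0=2.0, phi0=1e-8, loss=1.0, dt=1e-3, steps=40000):
    """Normalised single-mode rate equations for a Q-switched pulse.
    The gain n0 starts above the cavity loss, so the seed photon
    density phi0 grows into a giant pulse that dumps the stored
    inversion (pumping during the pulse is neglected)."""
    n, phi = n0, phi0
    trace = []
    for _ in range(steps):
        dphi = (n - loss) * phi   # net round-trip gain drives the photons
        dn = -n * phi             # inversion is depleted by the pulse
        phi += dphi * dt
        n += dn * dt
        trace.append(phi)
    return trace

trace = q_switch_pulse()
print(max(trace))
```

The pulse builds up exponentially while the inversion exceeds the loss, peaks as the inversion crosses threshold, and then decays, which is the ns-scale envelope that both of the thesis's models reproduce well.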
Uncertainties are pervasive in Earth system modelling. This is not just due to a lack of knowledge about physical processes but also has its seeds in intrinsic, i.e. inevitable and irreducible, uncertainties concerning the process of modelling itself. It is therefore indispensable to quantify uncertainty in order to determine which results are robust under this inherent uncertainty. The central goal of this thesis is to explore how uncertainties map onto the properties of interest, such as the phase space topology and the qualitative dynamics of the system. We address several types of uncertainty and apply methods of dynamical systems theory to a trendsetting field of climate research, the Indian monsoon. For a systematic analysis of the different facets of uncertainty, a box model of the Indian monsoon is investigated, which shows a saddle-node bifurcation against those parameters that influence the heat budget of the system; the bifurcation goes along with a regime shift from a wet to a dry summer monsoon. As some of these parameters are crucially influenced by anthropogenic perturbations, the question is, first, whether the occurrence of this bifurcation is robust against uncertainties in the parameters and in the number of considered processes and, second, whether the bifurcation can be reached under climate change. The results indicate, for example, the robustness of the bifurcation point against all considered parameter uncertainties. Reaching the critical point under climate change seems rather improbable. A novel method is applied for the analysis of the occurrence and the position of the bifurcation point in the monsoon model under parameter uncertainties. This method combines two standard approaches: a bifurcation analysis and multi-parameter ensemble simulations.
As a model-independent and therefore universal procedure, this method makes it possible to investigate the uncertainty of a bifurcation in a high-dimensional parameter space in many other models. With the monsoon model, the uncertainty about the external influence of the El Niño/Southern Oscillation (ENSO) is determined. There is evidence that ENSO influences the variability of the Indian monsoon, but the underlying physical mechanism is controversial. As a contribution to the debate, three different hypotheses of how ENSO and the Indian summer monsoon are linked are tested. In this thesis, the coupling through the trade winds is identified as the key link between these two major climate constituents. On the basis of this physical mechanism, the observed monsoon rainfall data can be reproduced to a great extent. Moreover, this mechanism can be identified in two general circulation models (GCMs) for the present-day situation and for future projections under climate change. Furthermore, uncertainties in the process of coupling models are investigated, with a focus on the comparison of forced dynamics with fully coupled dynamics. The former describes a particular type of coupling in which the dynamics of one sub-module is substituted by data. Intrinsic uncertainties and constraints are identified that prevent the consistency of a forced model with its fully coupled counterpart. Qualitative discrepancies between the two modelling approaches are highlighted, which lead to an overestimation of predictability and produce artificial predictability in the forced system. The results suggest that bistability and intermittent predictability, when found in a forced model set-up, should always be cross-validated with alternative coupling designs before being taken for granted. All in all, this thesis contributes to the fundamental issue of dealing with the uncertainties that the climate modelling community is confronted with.
Although some uncertainties can be included in the interpretation of the model results, intrinsic uncertainties were identified that are inevitable within a given modelling paradigm and are provoked by the specific modelling approach.
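The combination of a bifurcation analysis with a multi-parameter ensemble can be sketched on a cubic normal form with a fold: for each sampled value of an uncertain parameter, the control parameter is swept until the wet-like stable branch is lost and the system jumps to the other regime. The model and all numbers below are generic illustrations, not the thesis's monsoon box model.

```python
import numpy as np

def equilibrium(r, a, x0=-2.0, dt=0.02, steps=20000):
    """Relax dx/dt = r + a*x - x**3 from the lower (wet-like) branch.
    Below the fold the trajectory settles on a negative fixed point;
    above it, only the upper (dry-like) branch remains."""
    x = x0
    for _ in range(steps):
        x += (r + a * x - x ** 3) * dt
    return x

def critical_r(a, rs):
    """Sweep the control parameter and report where the regime shift occurs."""
    for r in rs:
        if equilibrium(r, a) > 0:   # jumped to the other branch
            return r
    return None

rs = np.linspace(0.0, 1.0, 101)
# ensemble over an uncertain model parameter a (illustrative +-20 % range)
criticals = [critical_r(a, rs) for a in (0.8, 1.0, 1.2)]
print(criticals)
```

The spread of the detected critical values across the ensemble is exactly the kind of quantity used to judge how robust the position of the bifurcation point is under parameter uncertainty.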
This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation and the annual cycle. More precisely, the work focused on two main problems: 1. How to separate both oscillations within a tractable model in order to understand the behaviour of the whole system. 2. How to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering that atmospheric effects are secondary to the ocean dynamics. The results may be summarised as follows: 1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction prove better at defining a lower limit for the dimensionality of the system and at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling, provides a physically appealing method of decomposing the data, as it substitutes an approximation of the geodesic distances on the manifold for the Euclidean distances. We expect that this method could be applied successfully to other oscillatory extended systems and, in particular, to meteorological systems. 2. A three-dimensional dynamical system could be modelled, using a backfitting algorithm, to describe the dynamics of the sea surface temperature in the tropical Pacific Ocean.
We observed that, although few data points were available, we could predict the future behaviour of the coupled ENSO-annual cycle system for lead times of up to six months, even though the constructed system presented several drawbacks: few data points to feed into the backfitting algorithm, an untrained model, a lack of forcing with external data, and the simplification of using a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skill of the three-dimensional time series was as good as that found in much more complex models. This suggests that the climatological system in the tropics is mainly explained by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. A global view of the work shows a general procedure for modelling climatological systems: first, find a suitable method of either linear or nonlinear dimensionality reduction; then extract low-dimensional time series from the applied method; finally, fit a low-dimensional model using a backfitting algorithm in order to predict future states of the system.
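The core of Isomap, replacing Euclidean distances by graph-geodesic distances before a classical multidimensional scaling step, can be sketched in a few lines. The implementation below is a minimal sketch of the published method applied to a synthetic one-dimensional manifold, not the thesis's analysis of tropical Pacific SST fields; the neighbourhood size and the spiral test curve are assumed for illustration.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=8, n_components=2):
    """Minimal Isomap: k-nearest-neighbour graph, graph-geodesic
    distances, then classical MDS on the geodesic distance matrix."""
    d = cdist(X, X)
    # keep only each point's k nearest neighbours as graph edges
    graph = np.full_like(d, np.inf)
    for i in range(len(X)):
        idx = np.argsort(d[i])[1:n_neighbors + 1]
        graph[i, idx] = d[i, idx]
    geo = shortest_path(graph, method="D", directed=False)
    # classical MDS: double-centre the squared geodesic distances
    n = len(X)
    j = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * j @ (geo ** 2) @ j
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_components]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# a 1-D manifold (a spiral) embedded in 3-D space
t = np.linspace(0, 3 * np.pi, 200)
X = np.column_stack([t * np.cos(t), t * np.sin(t), np.zeros_like(t)])
Y = isomap(X, n_components=1)
# the leading Isomap coordinate should follow the arc length along the curve
corr = abs(np.corrcoef(Y[:, 0], t)[0, 1])
print(corr)
```

On such a curled-up manifold, linear PCA needs two or three components, while the geodesic embedding recovers the single underlying coordinate, which is the sense in which Isomap gives a tighter lower bound on the dimensionality.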
When Galactic microlensing events of stars are observed, one usually measures a symmetric light curve corresponding to a single lens, or an asymmetric light curve, often with caustic crossings, in the case of a binary lens system. In principle, the fraction of binary stars in a certain separation range can be estimated from the number of measured microlensing events. However, a binary system may produce a light curve which can be fitted well as a single-lens light curve, in particular if the data sampling is poor and the error bars are large. We investigate what fraction of microlensing events produced by binary stars at different separations may be well fitted by, and hence misinterpreted as, single-lens events under various observational conditions. We find that this fraction depends strongly on the separation of the binary components, reaching its minimum between 0.6 and 1.0 Einstein radii, where it is still of the order of 5%. The Einstein radius corresponds to a few A.U. for typical Galactic microlensing scenarios. The rate of misinterpretation is higher for short microlensing events lasting up to a few months and for events with smaller maximum amplification. For fixed separation it increases for binaries with more extreme mass ratios. The problem of degeneracy between binary-lens and binary-source photometric light-curve solutions was studied on simulated data and on data observed by the PLANET collaboration. The fitting code BISCO, which uses the PIKAIA genetic-algorithm optimization routine, was written to fit binary-source microlensing light curves observed at different sites in the I, R and V photometric bands. Tests on simulated microlensing light curves show that BISCO succeeds in finding the solution of a binary-source event in a very wide parameter space. A flux-ratio method is suggested in this work for breaking the degeneracy between binary-lens and binary-source photometric light curves.
Models show that only a few additional data points in the photometric V band, together with a full light curve in the I band, will enable the degeneracy to be broken. Very good data quality and dense data sampling, combined with accurate binary-lens and binary-source modelling, yielded the discovery of the lowest-mass planet found outside the Solar System so far, OGLE-2005-BLG-390Lb, with only 5.5 Earth masses. This was the first observed microlensing event in which the degeneracy between a planetary binary-lens and an extreme-flux-ratio binary-source model has been successfully broken. For the events OGLE-2003-BLG-222 and OGLE-2004-BLG-347, the degeneracy was encountered despite very dense data sampling. From light-curve modelling and stellar evolution theory, there was a slight preference for explaining OGLE-2003-BLG-222 as a binary-source event and OGLE-2004-BLG-347 as a binary-lens event. However, without spectra, this degeneracy cannot be fully broken. No planet has been found around a white dwarf so far, though it is believed that Jovian planets should survive the late stages of stellar evolution and that white dwarfs will retain planetary systems in wide orbits. We want to perform high-precision astrometric observations of nearby white dwarfs in wide binary systems with red dwarfs in order to find planets around white dwarfs. We selected a sample of observing targets (WD-RD binary systems, not yet published) which can possibly host planets around the WD component, and modelled the synthetic astrometric orbits that could be observed for these targets using existing and future astrometric facilities. The modelling was performed for astrometric accuracies of 0.01, 0.1 and 1.0 mas, separations between WD and planet of 3 and 5 A.U., a binary system separation of 30 A.U., planet masses of 10 Earth masses and 1 and 10 Jupiter masses, WD masses of 0.5 and 1.0 solar masses, and distances to the system of 10, 20 and 30 pc.
It was found that the PRIMA facility at the VLTI, once it is operating, will be able to detect planets down to 1 Jupiter mass around white dwarfs by measuring the astrometric wobble of the WD due to a planetary companion. We show from the simulated observations that it is possible to model the orbits and recover the parameters describing the potential planetary systems.
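The binary-lens/binary-source degeneracy discussed above arises because a blend of two point sources, each magnified by the standard single-lens (Paczynski) curve, can mimic a short planetary anomaly when the flux ratio is extreme. A minimal sketch of that light-curve ingredient follows; the event parameters are illustrative values, not fits to any OGLE event.

```python
import numpy as np

def single_lens_magnification(t, t0, tE, u0):
    """Paczynski magnification of a point source by a single point lens:
    A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4))."""
    u = np.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
    return (u ** 2 + 2) / (u * np.sqrt(u ** 2 + 4))

def binary_source_flux(t, t0a, t0b, tE, u0a, u0b, flux_ratio):
    """Blend of two sources lensed by the same single lens; for an
    extreme flux ratio the faint source's close approach produces a
    short bump resembling a planetary binary-lens anomaly."""
    a1 = single_lens_magnification(t, t0a, tE, u0a)
    a2 = single_lens_magnification(t, t0b, tE, u0b)
    return (a1 + flux_ratio * a2) / (1 + flux_ratio)

t = np.linspace(-40, 40, 801)          # days
base = single_lens_magnification(t, 0.0, 20.0, 0.3)
blend = binary_source_flux(t, 0.0, 6.0, 20.0, 0.3, 0.02, 0.005)
print(base.max(), blend.max())
```

Since the two source stars generally have different colours, the bump's amplitude changes between the I and V bands, which is the physical basis of the flux-ratio method for breaking the degeneracy.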