In this work, binding interactions between biomolecules were analyzed by a technique based on electrically controllable DNA nanolevers. The technique was applied to virus-receptor interactions for the first time. As receptors, primarily peptides on DNA nanostructures and antibodies were utilized. The DNA nanostructures were integrated into the measurement technique and enabled the presentation of the peptides in a controllable geometrical order. The number of peptides could be varied to be compatible with the binding sites of the viral surface proteins.
Influenza A virus served as a model system, on which the general measurability was demonstrated. Variations of the receptor peptide, the surface ligand density, the measurement temperature and the virus subtypes showed the sensitivity and applicability of the technology. Additionally, the immobilization of virus particles enabled the measurement of differences in oligovalent binding of DNA-peptide nanostructures to the viral proteins in their native environment.
When the coronavirus pandemic broke out in 2020, work on binding interactions of a peptide from the hACE2 receptor and the spike protein of the SARS-CoV-2 virus revealed that oligovalent binding can be quantified with the switchSENSE technology. It could also be shown that small changes in the amino acid sequence of the spike protein resulted in complete loss of binding. Interactions of the peptide with inactivated virus material as well as with pseudovirus particles could be measured. Additionally, the switchSENSE technology was utilized to rank six antibodies by their binding affinity towards the nucleocapsid protein of SARS-CoV-2 for the development of a rapid antigen test device.
The technique was furthermore employed to show binding of a non-enveloped virus (adenovirus) and a virus-like particle (norovirus-like particle) to antibodies. Apart from binding interactions, the use of DNA origami levers with a length of around 50 nm enabled the switching of virus material. This proved that the technology is also able to size objects with a hydrodynamic diameter larger than 14 nm.
A theoretical work on diffusion and reaction-limited binding interactions revealed that the technique and the chosen parameters enable the determination of binding rate constants in the reaction-limited regime.
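The distinction between diffusion- and reaction-limited binding can be made concrete with the textbook Collins-Kimball picture, in which a diffusion-limited encounter rate combines in series with an intrinsic reaction rate. The sketch below is a generic illustration with hypothetical numbers, not the thesis's own calculation:

```python
import math

# Illustrative sketch (not from the thesis): in the Collins-Kimball picture,
# a diffusion-limited encounter rate k_D combines in series with an
# intrinsic reaction rate k_R. Binding is reaction-limited when k_R << k_D,
# in which case the measured on-rate is essentially k_R itself.

N_A = 6.022e23  # Avogadro's number, 1/mol

def smoluchowski_kD(D, a):
    """Diffusion-limited on-rate in M^-1 s^-1 for relative diffusion
    coefficient D (m^2/s) and encounter radius a (m)."""
    return 4.0 * math.pi * D * a * N_A * 1e3  # 1e3 converts m^3 to litres

def effective_kon(k_D, k_R):
    """Series combination of diffusive transport and intrinsic reaction."""
    return 1.0 / (1.0 / k_D + 1.0 / k_R)

# Hypothetical numbers for a virus-sized analyte (D ~ 4e-12 m^2/s, a ~ 50 nm)
k_D = smoluchowski_kD(D=4e-12, a=50e-9)
k_R = 1e5  # assumed intrinsic on-rate, M^-1 s^-1
k_eff = effective_kon(k_D, k_R)
print(f"k_D = {k_D:.2e} M^-1 s^-1, k_eff = {k_eff:.2e} M^-1 s^-1")
```

With these assumed values k_R is several orders of magnitude below k_D, so the effective on-rate is set almost entirely by the reaction step, which is the regime in which binding rate constants can be determined reliably.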
Overall, the applicability of the switchSENSE technique to virus-receptor binding interactions was demonstrated on multiple examples. While challenges remain, the setup enables the determination of affinities between viruses and receptors in their native environment. In particular, the possibilities for quantifying oligo- and multivalent binding interactions were demonstrated.
Giacconi et al. (1962) discovered a diffuse cosmic X-ray background with rocket experiments while searching for lunar X-ray emission. Later satellite missions found a spectral peak in the cosmic X-ray background at ~30 keV. Imaging X-ray satellites such as ROSAT (1990-1999) were able to resolve up to 80% of the background below 2 keV into single point sources, mainly active galaxies. The cosmic X-ray background is the integrated emission of all accreting supermassive (several million solar masses) black holes in the centres of active galaxies over cosmic time. Synthesis models need further populations of X-ray absorbed active galactic nuclei (AGN) in order to explain the cosmic X-ray background peak at ~30 keV. Current X-ray missions such as XMM-Newton and Chandra offer the possibility of studying these additional populations. This Ph.D. thesis studies the populations that dominate the X-ray sky. For this purpose the 120 ksec XMM-Newton Marano field survey, named for an earlier optical quasar survey in the southern hemisphere, is analysed. Based on the optical follow-up observations the X-ray sources are spectroscopically classified. Optical and X-ray properties of the different X-ray source populations are studied and differences are derived. The amount of absorption in the X-ray spectra of type II AGN, which are considered a main contributor to the X-ray background at ~30 keV, is determined. In order to extend the sample size of the rare type II AGN, this study also includes objects from another survey, the XMM-Newton Serendipitous Medium Sample. In addition, the dependence of the absorption in type II AGN on redshift and X-ray luminosity is analysed. We detected 328 X-ray sources in the Marano field. 140 sources were spectroscopically classified. We found 89 type I AGN, 36 type II AGN, 6 galaxies, and 9 stars. AGN, galaxies, and stars are clearly distinguishable by their optical and X-ray properties. Type I and II AGN do not separate clearly.
They have a significant overlap in all studied properties. In a few cases the X-ray properties contradict the observed optical properties of type I and type II AGN. For example, we find type II AGN that show evidence for optical absorption but are not absorbed in X-rays. Based on the additional use of near-infrared imaging (K-band), we were able to identify several of the rare type II AGN. The X-ray spectra of type II AGN from the XMM-Newton Marano field survey and the XMM-Newton Serendipitous Medium Sample were analysed. Since most of the sources have only ~40 X-ray counts in the XMM-Newton PN-detector, I carefully studied the fit results of simulated X-ray spectra as a function of fit statistic and binning method. The objects revealed only moderate absorption. In particular, I do not find any Compton-thick sources (absorbed by column densities of NH > 1.5 x 10^24 cm^−2). This gives evidence that type II AGN are not the main contributor to the X-ray background around 30 keV. Although bias effects may occur, type II AGN show no noticeable trend of the amount of absorption with redshift or X-ray luminosity.
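Why the fit statistic matters at ~40 counts per spectrum can be shown with a toy model. The sketch below (a generic illustration, not the thesis's simulation code) fits a constant amplitude to low-count Poisson data and exposes the well-known low bias of the Gaussian chi-squared statistic relative to the Poisson (Cash/C) statistic:

```python
import math, random

# Illustrative sketch (not the thesis code): fitting a constant model A to
# low-count Poisson spectra. With the Gaussian chi^2 statistic weighted by
# sigma_i = sqrt(n_i), the best-fit amplitude is the harmonic mean of the
# counts and is biased low; the Poisson likelihood (Cash/C) statistic gives
# the unbiased arithmetic mean. Empty bins must even be discarded for the
# chi^2 fit, which is itself a binning-related source of bias.

def poisson(mu, rng):
    """Knuth's algorithm, adequate for small mu."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(42)
true_A = 5.0
counts = [poisson(true_A, rng) for _ in range(2000)]
nonzero = [n for n in counts if n > 0]

A_chi2 = len(nonzero) / sum(1.0 / n for n in nonzero)  # minimises chi^2
A_cash = sum(counts) / len(counts)                     # maximises likelihood
print(f"chi^2 fit: {A_chi2:.2f}, C-stat fit: {A_cash:.2f} (true {true_A})")
```

The chi-squared estimate lands noticeably below the true value while the likelihood-based estimate does not, which is the qualitative behaviour that makes careful statistic and binning choices essential for faint-source spectral fits.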
The current thesis is focused on the properties of graphene supported by metallic substrates and specifically on the behaviour of electrons in such systems. Methods of scanning tunneling microscopy, electron diffraction and photoemission spectroscopy were applied to study the structural and electronic properties of graphene. The purpose of the first part of this work is to introduce the most relevant aspects of graphene physics and the methodical background of experimental techniques used in the current thesis.
The scientific part of this work starts with an extensive scanning tunneling microscopy study of the nanostructures that appear in Au-intercalated graphene on Ni(111). This study aimed to explore possible structural explanations of the Rashba-type spin splitting of ~100 meV experimentally observed in this system, which is much larger than predicted by theory. It was demonstrated that gold can be intercalated under graphene not only as a dense monolayer, but also in the form of well-ordered periodic arrays of nanoclusters, a structure not previously reported. Such nanocluster arrays are able to decouple graphene from the strongly interacting Ni substrate and render it quasi-free-standing, as demonstrated by our DFT study. At the same time, calculations confirm a strong enhancement of the proximity-induced SOI in graphene supported by such nanoclusters in comparison to monolayer gold. This effect, attributed to the reduced graphene-Au distance in the case of clusters, provides a large Rashba-type spin splitting of ~60 meV.
The obtained results not only provide a possible mechanism of SOI enhancement in this particular system, but can also be generalized to graphene on other strongly interacting substrates intercalated by nanostructures of heavy noble d-metals.
Even more intriguing is the proximity of graphene to heavy sp-metals, which were predicted to induce an intrinsic SOI and realize a spin Hall effect in graphene. Bismuth is the heaviest stable sp-metal and its compounds demonstrate a plethora of exciting physical phenomena. This was the motivation behind the next part of the current thesis, where the structural and electronic properties of a previously unreported phase of Bi-intercalated graphene on Ir(111) were studied by means of scanning tunneling microscopy, spin- and angle-resolved photoemission spectroscopy and electron diffraction. Photoemission experiments revealed a remarkable, nearly ideal graphene band structure with strongly suppressed signatures of interaction between graphene and the Ir(111) substrate. Moreover, the characteristic moiré pattern observed for graphene on Ir(111) by electron diffraction and scanning tunneling microscopy was strongly suppressed after intercalation. The whole set of experimental data gives evidence that Bi forms a dense intercalated layer that efficiently decouples graphene from the substrate. The interaction manifests itself only in the n-type charge doping (~0.4 eV) and a relatively small band gap at the Dirac point (~190 meV). The origin of this minor band gap is quite intriguing, and in this work it was possible to exclude a wide range of mechanisms that could be responsible for it, such as an induced intrinsic spin-orbit interaction, hybridization with the substrate states and corrugation of the graphene lattice. The main origin of the band gap was attributed to A-B sublattice symmetry breaking, a conclusion supported by careful analysis of the interference effects in photoemission, which provided a band gap estimate of ~140 meV.
While the previous chapters were focused on adjusting the properties of graphene by proximity to heavy metals, graphene on its own is a great object for studying various physical effects at crystal surfaces. The final part of this work is devoted to a study of surface scattering resonances by means of photoemission spectroscopy, where this effect manifests itself as a distinct modulation of photoemission intensity. Though scattering resonances were widely studied in the past by means of electron diffraction, reports of their observation in photoemission experiments have appeared only recently and remain scarce.
For a comprehensive study of scattering resonances, graphene was selected as a versatile model system with adjustable properties. A theoretical and historical introduction to the topic of scattering resonances is followed by a detailed description of the unusual features observed in the photoemission spectra obtained in this work; finally, the equivalence between these features and scattering resonances is demonstrated. The obtained photoemission results are in good qualitative agreement with the existing theory, as verified by our calculations in the framework of the interference model. This simple model gives a suitable explanation for the general experimental observations.
The possibilities of engineering the scattering resonances were also explored. A systematic study of graphene on a wide range of substrates revealed that the energy position of the resonances is directly related to the magnitude of charge transfer between graphene and the substrate. Moreover, it was demonstrated that the scattering resonances in graphene on Ir(111) can be suppressed by nanopatterning, either by a superlattice of Ir nanoclusters or by atomic hydrogen. These effects were attributed to strong local variations of the work function and/or destruction of the long-range order of the graphene lattice. The tunability of scattering resonances can be applied in optoelectronic devices based on graphene. Moreover, the results of this study expand the general understanding of the phenomenon of scattering resonances and are applicable to many other materials besides graphene.
We present an application of imprecise probability theory to the quantification of uncertainty in the integrated assessment of climate change. Our work is motivated by the fact that uncertainty about climate change is pervasive, and therefore requires a thorough treatment in the integrated assessment process. Classical probability theory faces some severe difficulties in this respect, since it cannot capture very poor states of information in a satisfactory manner. A more general framework is provided by imprecise probability theory, which offers a similarly firm evidential and behavioural foundation, while at the same time allowing one to capture more diverse states of information. An imprecise probability describes the information in terms of lower and upper bounds on probability. For the purpose of our imprecise probability analysis, we construct a diffusion ocean energy balance climate model that parameterises the global mean temperature response to secular trends in the radiative forcing in terms of climate sensitivity and effective vertical ocean heat diffusivity. We compare the model behaviour to the 20th century temperature record in order to derive a likelihood function for these two parameters and the forcing strength of anthropogenic sulphate aerosols. Results show a strong positive correlation between climate sensitivity and ocean heat diffusivity, and between climate sensitivity and absolute strength of the sulphate forcing. We identify two suitable imprecise probability classes for an efficient representation of the uncertainty about the climate model parameters and provide an algorithm to construct a belief function for the prior parameter uncertainty from a set of probability constraints that can be deduced from the literature or observational data.
For the purpose of updating the prior with the likelihood function, we establish a methodological framework that allows us to perform the updating procedure efficiently for two different updating rules: Dempster's rule of conditioning and the Generalised Bayes' rule. Dempster's rule yields a posterior belief function in good qualitative agreement with previous studies that tried to constrain climate sensitivity and sulphate aerosol cooling. In contrast, we are not able to produce meaningful imprecise posterior probability bounds from the application of the Generalised Bayes' Rule. We can attribute this result mainly to our choice of representing the prior uncertainty by a belief function. We project the Dempster-updated belief function for the climate model parameters onto estimates of future global mean temperature change under several emissions scenarios for the 21st century, and several long-term stabilisation policies. Within the limitations of our analysis we find that it requires a stringent stabilisation level of around 450 ppm carbon dioxide equivalent concentration to obtain a non-negligible lower probability of limiting the warming to 2 degrees Celsius. We discuss several frameworks of decision-making under ambiguity and show that they can lead to a variety of, possibly imprecise, climate policy recommendations. We find, however, that poor states of information do not necessarily impede a useful policy advice. We conclude that imprecise probabilities constitute indeed a promising candidate for the adequate treatment of uncertainty in the integrated assessment of climate change. We have constructed prior belief functions that allow much weaker assumptions on the prior state of information than a prior probability would require and, nevertheless, can be propagated through the entire assessment process. As a caveat, the updating issue needs further investigation. 
Belief functions constitute a sensible choice for the prior uncertainty representation only if more restrictive updating rules than the Generalised Bayes' Rule are available.
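Dempster's rule, central to the updating above, can be stated compactly. The sketch below is an illustrative implementation of Dempster's rule of combination on a small discrete frame (with hypothetical mass values); Dempster conditioning is the special case where one operand places all mass on the conditioning event:

```python
from itertools import product

# Illustrative sketch (not the thesis model): Dempster's rule of combination
# for two mass functions on a small frame of discernment. Dempster
# conditioning is the special case where one operand is categorical, i.e.
# puts all mass on the conditioning event.

def dempster_combine(m1, m2):
    """Combine mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + mA * mB
        else:
            conflict += mA * mB          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

theta = frozenset({"a", "b", "c"})           # frame of discernment
m1 = {frozenset({"a"}): 0.5, frozenset({"a", "b"}): 0.3, theta: 0.2}
# Conditioning on the event {b, c}: a categorical mass function
m2 = {frozenset({"b", "c"}): 1.0}
posterior = dempster_combine(m1, m2)
print(posterior)
```

For these inputs the conflict mass is 0.5 and renormalisation yields m({b}) = 0.6 and m({b, c}) = 0.4, showing how conflicting prior mass is redistributed rather than simply discarded.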
Mathematics plays a considerable role in physics teaching, albeit an ambivalent one. Often it even becomes an obstacle to learning physics and cannot unfold its emancipatory potential. The present work provides two building blocks for a well-founded conception of how to deal with mathematics when learning physics. In the theoretical part of the work, philosophy-of-science aspects of the role of mathematics in physics are reviewed and made accessible to the physics education research community in a coherent form. In addition, research results on learners' conceptions of physics and mathematics, as well as findings in the field of epistemology, are compiled. In the empirical part of the work, conceptions of the role of mathematics in physics held by students of grades 10 and 12 as well as by pre-service physics teachers in their undergraduate studies are surveyed with a questionnaire and evaluated using content-analytic and statistical methods. Among other things, the results show that, contrary to common opinion, mathematics in physics lessons does not carry negative connotations for learners, but, at least for younger learners, formal and algorithmic ones.
Nonlinear multistable systems under the influence of noise exhibit a plethora of interesting dynamical properties. A medium noise level causes hopping between the metastable states. This attractor-hopping process is characterized by laminar motion in the vicinity of the attractors and erratic motion taking place on chaotic saddles, which are embedded in the fractal basin boundary. This leads to noise-induced chaos. The investigation of the dissipative standard map revealed a noise-induced preference of attractors: some attractors attain a larger probability of occurrence than in the noise-free system. For a certain noise level this preference reaches a maximum. Other attractors occur less often; for sufficiently high noise they are completely extinguished. The complexity of the hopping process is examined for a model of two coupled logistic maps employing symbolic dynamics. With the variation of a parameter, the topological entropy, which is used together with the Shannon entropy as a measure of complexity, rises sharply at a certain value. This increase is explained by a novel saddle-merging bifurcation, which is mediated by a snap-back repeller. Scaling laws of the average time spent on one of the formerly disconnected parts and of the fractal dimension of the connected saddle describe this bifurcation in more detail. If a chaotic saddle is embedded in the open neighborhood of the basin of attraction of a metastable state, the required escape energy is lowered. This enhancement of noise-induced escape is demonstrated for the Ikeda map, which models a laser system with time-delayed feedback. The result is obtained using the theory of quasipotentials. This effect, as well as the two scaling laws for the saddle-merging bifurcation, are of experimental relevance.
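The basic phenomenon of noise-induced hopping between metastable states can be reproduced in a few lines. The sketch below uses a generic overdamped double-well system rather than the maps studied in the thesis, purely to illustrate how a medium noise level activates transitions across a basin boundary:

```python
import math, random

# Generic illustration (not the thesis maps): noise-induced hopping between
# the two metastable states of the overdamped double-well system
# x' = x - x^3, integrated with the Euler-Maruyama scheme. Without noise the
# trajectory stays in its well; a medium noise level causes attractor
# hopping across the basin boundary at x = 0.

def count_hops(D, steps=200_000, dt=0.01, x0=1.0, seed=1):
    rng = random.Random(seed)
    x, side, hops = x0, 1, 0
    amp = math.sqrt(2.0 * D * dt)          # noise increment amplitude
    for _ in range(steps):
        x += (x - x**3) * dt + amp * rng.gauss(0.0, 1.0)
        if x * side < 0:                   # crossed the barrier at x = 0
            hops += 1
            side = -side
    return hops

hops_quiet = count_hops(D=0.0)
hops_noisy = count_hops(D=0.25)
print(f"hops without noise: {hops_quiet}, with moderate noise: {hops_noisy}")
```

With zero noise the trajectory relaxes into one well and never leaves it; at a noise intensity comparable to the barrier height the hopping process sets in, which is the regime in which the statistics described above become observable.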
Elementary particle physics is a contemporary topic in science that is slowly being integrated into high-school education. These new implementations are challenging teachers’ professional knowledge worldwide. Physics education research is therefore faced with two important questions, namely how particle physics can be integrated in high-school physics curricula and how teachers can best be supported in enhancing their professional knowledge on particle physics. This doctoral research project set out to provide better guidelines for answering these two questions by conducting three studies on high-school particle physics education.
First, an expert concept mapping study was conducted to elicit experts’ expectations on what high-school students should learn about particle physics. Overall, 13 experts in particle physics, computing, and physics education participated in 9 concept mapping rounds. The broad knowledge base of the experts ensured that the final expert concept map covers all major particle physics aspects. Specifically, the final expert concept map includes 180 concepts and examples, connected with 266 links and crosslinks. Among them are also several links to students’ prior knowledge in topics such as mechanics and thermodynamics. The high interconnectedness of the concepts shows possible opportunities for including particle physics as a context for other curricular topics. As such, the resulting expert concept map is showcased as a well-suited tool for teachers to scaffold their instructional practice.
Second, a review of 27 high-school physics curricula was conducted. The review uncovered which concepts related to particle physics can be identified in most curricula. Each curriculum was reviewed by two reviewers who followed a codebook with 60 concepts related to particle physics. The analysis showed that most curricula mention cosmology, elementary particles, and charges, all of which are considered theoretical particle physics concepts. None of the experimental particle physics concepts appeared in more than half of the reviewed curricula. Additional analysis was done on two curricular subsets, namely curricula with and curricula without an explicit particle physics chapter. Curricula with an explicit particle physics chapter mention several additional explicit particle physics concepts, namely the Standard Model of particle physics, fundamental interactions, antimatter research, and particle accelerators. The latter is an example of an experimental particle physics concept. Additionally, the analysis revealed that, overall, most curricula include Nature of Science and the history of physics, albeit typically as context or as a teaching tool, respectively.
Third, a Delphi study was conducted to investigate stakeholders’ expectations regarding what teachers should learn in particle physics professional development programmes. Over 100 stakeholders from 41 countries represented four stakeholder groups, namely physics education researchers, research scientists, government representatives, and high-school teachers. The study resulted in a ranked list of the 13 most important topics to be included in particle physics professional development programmes. The highest-ranked topics are cosmology, the Standard Model, and real-life applications of particle physics. All stakeholder groups agreed on the overall ranking of the topics. While the highest-ranked topics are again more theoretical, stakeholders also expect teachers to learn about experimental particle physics topics, which are ranked as medium importance topics.
The three studies addressed two research aims of this doctoral project. The first research aim was to explore to what extent particle physics is featured in high-school physics curricula. The comparison of the outcomes of the curricular review and the expert concept map showed that curricula cover significantly less than what experts expect high-school students to learn about particle physics. For example, most curricula do not include concepts that could be classified as experimental particle physics. However, the strong connections between the different concepts show that experimental particle physics can be used as context for theoretical particle physics concepts, Nature of Science, and other curricular topics. In this way, particle physics can be introduced in classrooms even though it is not (yet) explicitly mentioned in the respective curriculum.
The second research aim was to identify which aspects of content knowledge teachers are expected to learn about particle physics. The comparison of the Delphi study results to the outcomes of the curricular review and the expert concept map showed that stakeholders generally expect teachers to enhance their school knowledge as defined by the curricula. Furthermore, teachers are also expected to enhance their deeper school knowledge by learning how to connect concepts from their school knowledge to other concepts in particle physics and beyond. As such, professional development programmes that focus on enhancing teachers’ school knowledge and deeper school knowledge best support teachers in building relevant context in their instruction.
Overall, this doctoral research project reviewed the current state of high-school particle physics education and provided guidelines for future enhancements of the particle physics content in high-school student and teacher education. The outcomes of the project support further implementations of particle physics in high-school education, both as explicit content and as context for other curricular topics. Furthermore, the mixed-methods approach and the outcomes of this research project lead to several implications for professional development programmes and science education research, which are discussed in the final chapters of this dissertation.
The aim of this work is to overcome a discrepancy between the theory of phase, or phase dynamics, and its application in time series analysis: While the theoretical phase is uniquely determined and invariant under coordinate transformations and with respect to the chosen observable, the standard methods for estimating the phase from given time series yield results that, on the one hand, depend on the chosen observables and thus, on the other hand, by no means describe the respective system in a unique and invariant way. To make this discrepancy explicit, the terminological distinction between phase and protophase is introduced: The term phase is reserved for variables that correspond to the theoretical concept of the phase and therefore characterize the respective system in an invariant way, whereas the observable-dependent estimates of the phase from time series are called protophases. The central subject of this work is the development of a deterministic transformation that leads from any protophase of a self-sustained oscillator to the uniquely determined phase. This then enables an invariant description of coupled oscillators and their interaction. The application of the transformation and its effect are demonstrated both on numerical examples (in particular, the phase transformation is extended in one example to the case of three coupled oscillators) and on multivariate measurements of the ECG, the pulse and respiration, from which phase models of the cardiorespiratory interaction are reconstructed. Finally, the phase transformation for autonomous oscillators is extended to the case of a non-negligible amplitude dependence of the protophase, which enables, for example, the numerical determination of the isochrones of the chaotic Rössler system.
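In its simplest form, the protophase-to-phase transformation described above maps a protophase value through the cumulative stationary protophase density. The sketch below is a minimal empirical version of that idea (using the sample distribution as the density estimate), with a synthetic distorted observable as input:

```python
import math, random

# Minimal numerical sketch of the protophase-to-phase transformation
# described above: the observable-dependent protophase theta is mapped to
# the invariant phase via phi(theta) = 2*pi * integral_0^theta rho, where
# rho is the stationary protophase density; here rho is estimated by the
# empirical distribution of the samples.

def protophase_to_phase(thetas):
    """Map protophase samples (radians in [0, 2*pi)) to phase values."""
    order = sorted(range(len(thetas)), key=lambda i: thetas[i])
    phi = [0.0] * len(thetas)
    for rank, i in enumerate(order):
        phi[i] = 2.0 * math.pi * rank / len(thetas)
    return phi

# Synthetic protophase: a uniformly distributed true phase psi seen through
# a distorting (but invertible) observable, theta = psi + 0.4*sin(psi)
rng = random.Random(0)
psi = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(50_000)]
theta = [(p + 0.4 * math.sin(p)) % (2.0 * math.pi) for p in psi]
phi = protophase_to_phase(theta)

# The transformation undoes the observable-dependent distortion
err = max(abs(a - b) for a, b in zip(phi, psi))
print(f"max |phi - psi| = {err:.4f}")
```

Because the distortion is monotonic within the cycle, the empirical transform recovers the true uniformly rotating phase up to sampling error, which is the invariance property the thesis exploits.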
In the present work, we discuss two subjects related to the nonequilibrium dynamics of polymers or biological filaments adsorbed to two-dimensional substrates. The first part is dedicated to the thermally activated dynamics of polymers on structured substrates in the presence or absence of a driving force. The structured substrate is represented by double-well or periodic potentials. We consider both homogeneous and point driving forces. Point-like driving forces can be realized in single-molecule manipulation by atomic force microscopy tips. Uniform driving forces can be generated by hydrodynamic flow or by electric fields for charged polymers. In the second part, we consider collective filament motion in motility assays for motor proteins, where filaments glide over a motor-coated substrate. The model for the simulation of the filament dynamics contains interacting deformable filaments that move under the influence of forces from molecular motors and thermal noise. Motor tails are attached to the substrate and modeled as flexible polymers (entropic springs); motor heads perform a directed walk with a given force-velocity relation. We study the collective filament dynamics and pattern formation as a function of the motor and filament density, the force-velocity characteristics, the detachment rate of motor proteins and the filament interaction. In particular, the formation and statistics of filament patterns such as nematic ordering due to motor activity or clusters due to blocking effects are investigated. Our results are experimentally accessible and possible experimental realizations are discussed.
In this dissertation the lattice and magnetic recovery dynamics of the two heavy rare-earth metals Dy and Gd after femtosecond photoexcitation are described. For the investigations, thin films of Dy and Gd were measured at low temperatures in the antiferromagnetic phase of Dy and close to room temperature in the ferromagnetic phase of Gd. Two different optical pump-x-ray probe techniques were employed: Ultrafast x-ray diffraction with hard x-rays (UXRD) yields the structural response of the heavy rare-earth metals, and resonant soft (elastic) x-ray diffraction (RSXD) allows direct measurement of changes in the helical antiferromagnetic order of Dy. The combination of both techniques makes it possible to study the complex interaction between the magnetic and phononic subsystems.
The purpose of Probabilistic Seismic Hazard Assessment (PSHA) at a construction site is to provide the engineers with a probabilistic estimate of the ground-motion level that could be equaled or exceeded at least once in the structure’s design lifetime. Confidence in the predicted ground motion allows the engineers to optimize structural design and mitigate the risk of extensive damage or, in the worst case, a collapse. It is therefore in the interest of engineering, insurance, disaster mitigation, and the security of society at large to reduce uncertainties in the prediction of design ground-motion levels.
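The notion of a ground-motion level "equaled or exceeded at least once in the design lifetime" is standard PSHA bookkeeping and can be made explicit with the Poisson exceedance model. This small illustration is generic, not specific to this study:

```python
import math

# Standard PSHA bookkeeping (illustrative, not specific to this study):
# under a Poisson hazard model, the probability of at least one exceedance
# of a ground-motion level with annual rate lam during a design lifetime of
# T years is P = 1 - exp(-lam * T).

def exceedance_probability(lam, T):
    return 1.0 - math.exp(-lam * T)

def annual_rate(P, T):
    """Annual exceedance rate implied by probability P over T years."""
    return -math.log(1.0 - P) / T

# The common design target of "10% probability in 50 years" corresponds to
# the familiar ~475-year return period
lam = annual_rate(0.10, 50.0)
print(f"return period = {1.0 / lam:.0f} years")
```

Inverting the relation in this way is how a design target stated as a probability over a lifetime is converted into the annual exceedance rate at which the hazard curve is read off.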
In this study, I am concerned with quantifying and reducing the prediction uncertainty of regression-based Ground-Motion Prediction Equations (GMPEs). Essentially, GMPEs are regressed best-fit formulae relating event, path, and site parameters (predictor variables) to observed ground-motion values at the site (prediction variable). GMPEs are characterized by a parametric median (μ) and a non-parametric variance (σ) of prediction. μ captures the known ground-motion physics i.e., scaling with earthquake rupture properties (event), attenuation with distance from source (region/path), and amplification due to local soil conditions (site); while σ quantifies the natural variability of data that eludes μ. In a broad sense, the GMPE prediction uncertainty is cumulative of 1) uncertainty on estimated regression coefficients (uncertainty on μ,σ_μ), and 2) the inherent natural randomness of data (σ). The extent of μ parametrization, the quantity, and quality of ground-motion data used in a regression, govern the size of its prediction uncertainty: σ_μ and σ.
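The division of labour between μ and σ can be illustrated with a deliberately reduced toy GMPE containing a single predictor. The sketch below (hypothetical coefficients, synthetic data, not the thesis's regression) shows ordinary least squares recovering the median model while the residual spread estimates σ:

```python
import math, random

# Hypothetical one-predictor sketch of the regression described above:
# a toy GMPE log10(Y) = c1 + c2*log10(R) + eps, eps ~ N(0, sigma), where R
# is source distance. Least squares recovers the median model mu (c1, c2);
# the residual standard deviation estimates the aleatory variability sigma.

rng = random.Random(7)
c1_true, c2_true, sigma_true = -1.0, -1.6, 0.3
R = [rng.uniform(5.0, 200.0) for _ in range(5000)]          # distances, km
x = [math.log10(r) for r in R]
y = [c1_true + c2_true * xi + rng.gauss(0.0, sigma_true) for xi in x]

# Closed-form simple linear regression
n = len(x)
mx, my = sum(x) / n, sum(y) / n
c2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      / sum((xi - mx) ** 2 for xi in x))
c1 = my - c2 * mx
resid = [yi - (c1 + c2 * xi) for xi, yi in zip(x, y)]
sigma = math.sqrt(sum(r * r for r in resid) / (n - 2))
print(f"c1 = {c1:.2f}, c2 = {c2:.2f}, sigma = {sigma:.2f}")
```

With ample data the coefficient uncertainty σ_μ is small and σ converges to the true aleatory spread; shrinking the dataset or adding poorly constrained coefficients inflates σ_μ, which is the trade-off examined next.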
In the first step, I present the impact of μ parametrization on the size of σ_μ and σ. Over-parametrization appears to increase the σ_μ, because of the large number of regression coefficients (in μ) to be estimated with insufficient data. Under-parametrization mitigates σ_μ, but the reduced explanatory strength of μ is reflected in inflated σ. For an optimally parametrized GMPE, a ~10% reduction in σ is attained by discarding the low-quality data from pan-European events with incorrect parametric values (of predictor variables).
In the case of regions with scarce ground-motion recordings, without under-parametrization, the only way to mitigate σ_μ is to substitute long-term earthquake data at a location with short-term samples of data across several locations – the Ergodic Assumption. However, the price of the ergodic assumption is an increased σ, due to the region-to-region and site-to-site differences in ground-motion physics. σ of an ergodic GMPE developed from a generic ergodic dataset is much larger than that of non-ergodic GMPEs developed from region- and site-specific non-ergodic subsets - which were too sparse to produce their own specific GMPEs. Fortunately, with the dramatic increase in recorded ground-motion data at several sites across Europe and the Middle East, I could quantify the region- and site-specific differences in ground-motion scaling and upgrade the GMPEs with 1) substantially more accurate region- and site-specific μ for sites in Italy and Turkey, and 2) significantly smaller prediction variance σ. The benefit of such enhancements to GMPEs is quite evident in my comparison of PSHA estimates from ergodic versus region- and site-specific GMPEs, where the differences in predicted design ground-motion levels, at several sites in Europe and Middle-Eastern regions, are as large as ~50%.
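The variance penalty of the ergodic assumption can be demonstrated schematically: if residuals at each site share a site-specific offset, lumping all sites together inflates σ by the between-site variance, while estimating the site terms (as a mixed-effects regression does) removes it. The numbers below are hypothetical:

```python
import math, random

# Schematic illustration (hypothetical numbers) of the ergodic trade-off
# described above: residuals at each site share a site term b_s. An ergodic
# sigma mixes all sites together; estimating b_s per site (as in a
# mixed-effects regression) removes the between-site variance tau^2 and
# leaves only the smaller single-site variability phi.

rng = random.Random(3)
tau, phi = 0.25, 0.45            # between-site and within-site std, log units
records = []
for s in range(40):                          # 40 sites
    b_s = rng.gauss(0.0, tau)                # site term
    for _ in range(50):                      # 50 records per site
        records.append((s, b_s + rng.gauss(0.0, phi)))

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Ergodic sigma: spread of all residuals with sites mixed together
sigma_ergodic = std([r for _, r in records])

# Site-specific sigma: remove each site's mean residual (estimated b_s)
by_site = {}
for s, r in records:
    by_site.setdefault(s, []).append(r)
corrected = [r - sum(rs) / len(rs) for rs in by_site.values() for r in rs]
sigma_site = std(corrected)
print(f"ergodic sigma = {sigma_ergodic:.3f}, site-specific = {sigma_site:.3f}")
```

The ergodic estimate approaches sqrt(tau^2 + phi^2) while the site-corrected estimate approaches phi, mirroring the σ reduction reported above when region- and site-specific terms are resolved.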
Resolving the ergodic assumption with mixed-effects regressions is feasible when the quantified region- and site-specific effects are physically meaningful and the non-ergodic subsets (regions and sites) are defined a priori through expert knowledge. In the absence of expert definitions, I demonstrate the potential of machine-learning techniques to identify efficient clusters of site-specific non-ergodic subsets based on latent similarities in their ground-motion data. Clustered site-specific GMPEs bridge the gap between site-specific and fully ergodic GMPEs, with a partially non-ergodic μ and a σ ~15% smaller than the ergodic variance.
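The clustering idea can be sketched with a toy example (features, numbers, and the minimal k-means stand-in are all hypothetical): per-site summary statistics of ground-motion residuals are grouped into clusters, each of which could then receive its own partially non-ergodic GMPE.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-site features (both axes invented for illustration):
# a mean residual ("site term") and a high-frequency amplification proxy,
# drawn from two latent site classes.
class_a = rng.normal([-0.3, 0.1], 0.08, size=(20, 2))
class_b = rng.normal([0.4, 0.6], 0.08, size=(20, 2))
feats = np.vstack([class_a, class_b])

def two_means(x, iters=50):
    """Minimal 2-cluster k-means, seeded with the first and last point."""
    centers = np.stack([x[0], x[-1]])
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        centers = np.stack([x[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = two_means(feats)
print(labels)  # sites fall into two partially non-ergodic groups
```

In practice the feature choice (which residual statistics to cluster on) carries the physical meaning; the clustering algorithm itself is interchangeable.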
The methodological refinements to GMPE development produced in this study are applicable to new ground-motion datasets, to further enhance the certainty of ground-motion prediction and, thereby, of seismic hazard assessment. Advanced statistical tools show great potential for improving the predictive capabilities of GMPEs, but the fundamental requirement remains: a large quantity of high-quality ground-motion data from several sites over an extended time period.
In this thesis, the interplay between hydrodynamic transport and specific adhesion is investigated theoretically. An important biological motivation for this work is the rolling adhesion of white blood cells, experimentally investigated in flow chambers. There, specific adhesion is mediated by weak bonds between complementary molecular building blocks which are either located on the cell surface (receptors) or attached to the bottom plate of the flow chamber (ligands). The model system under consideration is a hard sphere covered with receptors moving above a planar ligand-bearing wall. The motion of the sphere is influenced by a simple shear flow, deterministic forces, and Brownian motion. An algorithm is presented that numerically simulates this motion as well as the formation and rupture of bonds between receptors and ligands. The algorithm spatially resolves individual receptors and ligands, which opens up the perspective of applying the results also to flow chamber experiments with patterned substrates based on modern nanotechnological developments. In the first part, the influence of the flow rate, as well as of the number and geometry of receptors and ligands, on the probability of initial binding is studied. This is done by determining the mean time that elapses until the first encounter between a receptor and a ligand occurs. It turns out that, besides the number of receptors, the height by which the receptors are elevated above the surface of the sphere plays an especially important role. These findings are in good agreement with observations of actual biological systems such as white blood cells or malaria-infected red blood cells. Then the influence on the motion of the sphere of bonds that have formed between receptors and ligands but easily rupture in response to force is studied. It is demonstrated that different states of motion, for example rolling, can be distinguished.
The appearance of these states as a function of the important model parameters is then systematically investigated. Furthermore, it is shown which bond property increases the ability of cells to roll stably over a large range of applied flow rates. Finally, the model is applied to another biological process, the transport of spherical cargo particles by molecular motors. In analogy to the systems described so far, molecular motors can be considered as bonds that are able to move actively. In this part of the thesis, the mean distance over which the cargo particles are transported is determined.
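The competition between transport and arrest by bonds can be caricatured by a two-state stochastic process (a drastic simplification with constant rates, assumed here for illustration; it does not reproduce the spatially resolved algorithm of the thesis): the sphere moves with the flow while unbound and is stalled while a bond exists.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-state caricature of rolling adhesion: the sphere moves at speed v
# while unbound and is arrested while bound. Binding (k_on) and rupture
# (k_off) happen with constant rates; waiting times are exponential.
v, k_on, k_off = 1.0, 2.0, 4.0
t_total, x, t = 2000.0, 0.0, 0.0
bound = False
while t < t_total:
    rate = k_off if bound else k_on
    dt = rng.exponential(1.0 / rate)   # waiting time to the next event
    if not bound:
        x += v * dt                    # advance only while unbound
    bound = not bound
    t += dt

mean_speed = x / t
# analytically, the fraction of time unbound is k_off/(k_on+k_off) = 2/3
print(mean_speed)
```

Force-dependent rupture rates (e.g. a Bell-type k_off growing with load) would make the arrested state sensitive to the flow rate, which is where the different states of motion originate.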
This thesis presents new approaches to evolutions of binary black hole systems in numerical relativity. We analyze and compare evolutions from various physically motivated initial data sets, in particular presenting the first evolutions of Thin Sandwich data generated by the Meudon group. For the first time, two different quasi-circular orbit initial data sequences are compared through fully 3D numerical evolutions: Puncture data and Thin Sandwich data (TSD) based on a helical Killing vector ansatz. The two sets are compared in terms of the physical quantities that can be measured from the numerical data and in terms of their evolutionary behavior. The evolutions demonstrate that for the latter, "Meudon" datasets, the black holes do in fact orbit for a longer amount of time before they merge, in comparison with Puncture data from the same separation. This indicates that they are potentially better estimates of quasi-circular orbit parameters. The merger times resulting from the numerical simulations are consistent with independent post-Newtonian estimates that the final plunge phase of a black-hole inspiral should take about 60% of an orbit.
Stellar magnetic fields, a crucial component of star formation and evolution, evade direct observation, at least with current and near-future instruments. However, to investigate whether magnetic fields are generated by a dynamo process or represent relics of the formation process, and whether they behave similarly to the Sun's field or very differently, it is essential to study their structure and temporal evolution. Fortunately, nature provides us with the possibility of indirectly observing surface topologies on distant stars by means of the Doppler shift and the polarization of light, though not without challenges. Based on these effects, the so-called Zeeman-Doppler Imaging (ZDI) technique is a powerful method to retrieve the magnetic fields of rapidly rotating stars from spectropolarimetric observations in terms of Stokes profiles. In recent years, a large number of stellar magnetic field distributions could be reconstructed by ZDI. However, the implementation of this method often relies on many approximations because, as an inversion method, it entails enormous computational requirements. The aim of this thesis is to develop methods for a ZDI designed to invert time-resolved spectropolarimetric data of active late-type stars and to account for the complex and small-scale magnetic fields expected on these stars. In order to reliably reconstruct the detailed field orientation and strength, the inversion method is designed to use all four Stokes components. Furthermore, it is based on fully polarized radiative transfer calculations to account for the intricate interplay between temperature and magnetic field. Finally, the application of the newly developed ZDI code to Stokes I and V observations of II Pegasi (short: II Peg) was intended to deliver the first magnetic surface maps for this highly active star.
To cope with the high computational burden of a radiative-transfer-based ZDI, we developed a novel approximation method to speed up the inversion process. It is based on Principal Component Analysis and Artificial Neural Networks; the latter approximate the functional mapping between atmospheric parameters and the corresponding local Stokes profiles. Inverse problems such as the one we are dealing with are potentially ill-posed and require regularization. We propose a new regularization scheme, which implements a local entropy function that accounts for the peculiarities of reconstructing localized magnetic fields. To deal with the relatively large noise that is always present in polarimetric data, we developed a multi-line denoising technique based on Principal Component Analysis. In contrast to other multi-line techniques, which extract a kind of mean profile from a large number of spectral lines, this method allows the extraction of individual spectral lines and thus an inversion on the basis of specific lines. All these methods are incorporated in our newly developed ZDI code iMap, which is based on a conjugate gradient method. An in-depth validation of our new synthesis method demonstrates the reliability and accuracy of this approach, as well as a gain in computation time of almost three orders of magnitude relative to conventional radiative transfer calculations. We investigated the influence of the different Stokes components (IV / IVQU) on the ability to reconstruct a known synthetic field configuration. In doing so, we validate the capability of our inversion code and also assess the limitations of magnetic field inversions in general. In a first application to II Peg, a K2 IV subgiant, we derived surface distributions of temperature and magnetic field from spectropolarimetric data obtained in 2004 and 2007. This gives, for the first time, the simultaneous temporal evolution of the surface temperature and magnetic field distribution on II Peg.
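The principle behind PCA-based denoising can be sketched in a toy setup (the line shapes, noise level, and rank are assumptions; this is not the iMap implementation): many noisy line profiles that are intrinsically low-rank are projected onto their leading principal components, where the signal concentrates while the noise spreads over all components.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy "multi-line" data: 300 Gaussian line profiles of varying depth
# (rank-1 signal) plus white noise.
wav = np.linspace(-1, 1, 200)
clean = np.exp(-wav**2 / 0.02)
depths = rng.uniform(0.5, 1.0, 300)
spectra = depths[:, None] * clean[None, :]
noisy = spectra + rng.normal(0, 0.2, spectra.shape)

# Keep only the leading principal components of the mean-subtracted data.
mean = noisy.mean(axis=0)
U, s, Vt = np.linalg.svd(noisy - mean, full_matrices=False)
k = 3
denoised = mean + U[:, :k] * s[:k] @ Vt[:k]

err_noisy = np.mean((noisy - spectra) ** 2)
err_denoised = np.mean((denoised - spectra) ** 2)
print(err_noisy, err_denoised)  # the reconstruction error drops sharply
```

Because each denoised row is still an individual profile (not an averaged mean profile), line-specific information of the kind the thesis' method preserves survives the filtering.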
Uncertainties are pervasive in Earth system modelling. This is not just due to a lack of knowledge about physical processes but also has its seeds in intrinsic, i.e. inevitable and irreducible, uncertainties concerning the process of modelling itself. Therefore, it is indispensable to quantify uncertainty in order to determine which results are robust under this inherent uncertainty. The central goal of this thesis is to explore how uncertainties map onto the properties of interest, such as the phase-space topology and the qualitative dynamics of the system. We address several types of uncertainty and apply methods of dynamical systems theory to a trendsetting field of climate research, the Indian monsoon. For a systematic analysis of the different facets of uncertainty, a box model of the Indian monsoon is investigated, which shows a saddle-node bifurcation against those parameters that influence the heat budget of the system; the bifurcation goes along with a regime shift from a wet to a dry summer monsoon. As some of these parameters are crucially influenced by anthropogenic perturbations, the questions are, first, whether the occurrence of this bifurcation is robust against uncertainties in the parameters and in the number of considered processes, and second, whether the bifurcation can be reached under climate change. The results indicate, for example, the robustness of the bifurcation point against all considered parameter uncertainties, while reaching the critical point under climate change seems rather improbable. A novel method is applied for analyzing the occurrence and position of the bifurcation point in the monsoon model under parameter uncertainty. This method combines two standard approaches: a bifurcation analysis and multi-parameter ensemble simulations.
As a model-independent and therefore universal procedure, this method allows the uncertainty referring to a bifurcation in a high-dimensional parameter space to be investigated in many other models. With the monsoon model, the uncertainty about the external influence of the El Niño/Southern Oscillation (ENSO) is determined. There is evidence that ENSO influences the variability of the Indian monsoon, but the underlying physical mechanism is controversial. As a contribution to this debate, three different hypotheses of how ENSO and the Indian summer monsoon are linked are tested. In this thesis, the coupling through the trade winds is identified as the key link between these two major climate constituents. On the basis of this physical mechanism, the observed monsoon rainfall data can be reproduced to a great extent. Moreover, this mechanism can be identified in two general circulation models (GCMs), both for the present-day situation and for future projections under climate change. Furthermore, uncertainties in the process of coupling models are investigated, with a focus on comparing forced dynamics with fully coupled dynamics. The former describes a particular type of coupling where the dynamics of one sub-module is substituted by data. Intrinsic uncertainties and constraints are identified that prevent the consistency of a forced model with its fully coupled counterpart. Qualitative discrepancies between the two modelling approaches are highlighted, which lead to an overestimation of predictability and produce artificial predictability in the forced system. The results suggest that bistability and intermittent predictability, when found in a forced model set-up, should always be cross-validated with alternative coupling designs before being taken for granted. All in all, this thesis contributes to the fundamental issue of dealing with uncertainty that the climate modelling community is confronted with.
While some uncertainties can be included in the interpretation of the model results, intrinsic uncertainties were identified that are inevitable within a given modelling paradigm and are provoked by the specific modelling approach.
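The combination of a bifurcation analysis with multi-parameter ensembles can be sketched on a toy fold system (the normal form and the parameter ranges are assumptions, not the monsoon box model): for each ensemble member, the saddle-node point is located where the right-hand side and its derivative vanish simultaneously, yielding a distribution of critical-point positions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for a model with a fold: dx/dt = p + q*x - x**3.
# The saddle-node sits where f = 0 and df/dx = 0 simultaneously.
def fold_point(q):
    x_c = np.sqrt(q / 3.0)      # df/dx = q - 3*x**2 = 0
    return x_c**3 - q * x_c     # f = 0 solved for p at x_c

# Ensemble over the uncertain parameter q: how robust is the position
# of the critical point p_crit under parameter uncertainty?
q_samples = rng.uniform(0.8, 1.2, 10_000)
p_crit = fold_point(q_samples)
print(p_crit.mean(), p_crit.std())
```

A narrow spread of p_crit across the ensemble would correspond to the robustness of the bifurcation point reported above; a wide spread would flag the critical threshold itself as uncertain.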
In this work, nano-electrode arrays were used for single-object immobilization by means of dielectrophoresis. Fluorescently labelled nanospheres were investigated as a model system, and the results obtained were transferred to biological samples. The investigations, in combination with various electrode layouts, led to a deterministic isolation of individual nanospheres above a fixed ratio between the size of the nanosphere and the diameter of the electrode tips. Dielectrophoretic immobilization was also demonstrated for the proteins BSA and R-PE, and individual R-PE molecules could be isolated. This required, in addition to an electrode layout optimized with respect to the field gradient by means of field simulations, an optimization of the field parameters, in particular the voltage and frequency.
Besides dielectrophoresis, other effects of the electric field were also observed, such as electrolysis at the nano-electrodes and flows above the electrode array caused by Joule heating and AC-electroosmotic flow. In addition, dielectrophoresis of silver particles could be observed and investigated by fluorescence microscopy, atomic force microscopy, scanning electron microscopy, and energy-dispersive X-ray spectroscopy. Finally, the objectives and cameras used were analyzed with respect to their light sensitivity, so that the isolation of single biomolecules at nano-electrodes could be verified.
In summary, the isolation of single nano-objects and biomolecules at nano-electrode arrays was achieved. Owing to the parallel approach, this allows statements about the behavior of single molecules with good statistics.
Organic bulk heterojunction (BHJ) solar cells based on polymer:fullerene blends are a promising alternative for low-cost solar energy conversion. Despite significant improvements of the power conversion efficiency in recent years, the fundamental working principles of these devices are not yet fully understood. In general, the current output of organic solar cells is determined by the generation of free charge carriers upon light absorption and their transport to the electrodes, in competition with the loss of charge carriers due to recombination.
The objective of this thesis is to provide a comprehensive understanding of the dynamic processes and physical parameters determining the device performance. A new approach for analyzing the characteristic current-voltage output was developed, comprising the experimental determination of the efficiencies of charge carrier generation, recombination, and transport, combined with numerical device simulations.
Central issues at the beginning of this work were the influence of an electric field on the free carrier generation process and the contributions of generation, recombination, and transport to the current-voltage characteristics. An elegant way to directly measure the field dependence of free carrier generation is the Time Delayed Collection Field (TDCF) method. In TDCF, charge carriers are generated by a short laser pulse and subsequently extracted by a defined rectangular voltage pulse. A new setup was established with an improved time resolution compared to former reports in the literature. It was found that charge generation is in general independent of the electric field, in contrast to the current view in the literature and opposed to the expectations of the Braun-Onsager model that was commonly used to describe the charge generation process. Even in cases where the charge generation was found to be field-dependent, numerical modelling showed that this field dependence cannot, in general, account for the voltage dependence of the photocurrent. This highlights the importance of efficient charge extraction in competition with non-geminate recombination, which is the second objective of the thesis.
Therefore, two different techniques were combined to characterize the dynamics and efficiency of non-geminate recombination under device-relevant conditions. One new approach is to perform TDCF measurements with increasing delay between the generation and extraction of charges. Thus, TDCF was used for the first time to measure charge carrier generation, recombination, and transport with the same experimental setup. This excludes experimental errors due to different measurement and preparation conditions and demonstrates the strength of this technique. An analytic model for the description of TDCF transients was developed and revealed the experimental conditions under which reliable results can be obtained. In particular, it turned out that the RC time of the setup, which is mainly given by the sample geometry, has a significant influence on the shape of the transients and has to be considered for a correct data analysis.
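The delay-sweep idea can be illustrated with a simplified model (an assumption for illustration, ignoring the RC convolution and other effects discussed above): carriers decaying purely by bimolecular recombination obey dn/dt = -k n^2, so 1/n(t) = 1/n0 + k t is linear in the delay time and a straight-line fit recovers the recombination coefficient.

```python
import numpy as np

# Illustrative parameters (typical orders of magnitude only)
k_true, n0 = 1e-12, 1e17            # cm^3/s, cm^-3
delays = np.linspace(0, 5e-6, 40)   # delay between generation and extraction

# Bimolecular decay: n(t) = n0 / (1 + k*n0*t)
n = n0 / (1.0 + k_true * n0 * delays)

# A linear fit of 1/n versus delay recovers k (slope) and n0 (1/intercept)
slope, intercept = np.polyfit(delays, 1.0 / n, 1)
print(slope, 1.0 / intercept)
```

With real transients, the finite RC time smears the extracted charge over the rise of the voltage pulse, which is why the analytic transient model mentioned above is needed before such a fit is trustworthy.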
Secondly, a complementary method was applied to characterize charge carrier recombination under steady-state bias and illumination, i.e. under realistic operating conditions. This approach relies on the precise determination of the steady-state carrier densities established in the active layer. It turned out that existing techniques were not sufficient to measure carrier densities with the necessary accuracy. Therefore, a new technique, Bias-Assisted Charge Extraction (BACE), was developed. Here, the charge carriers photogenerated under steady-state illumination are extracted by applying a high reverse bias. The accelerated extraction, compared to conventional charge extraction, minimizes losses through non-geminate recombination and trapping during extraction. By performing numerical device simulations under steady state, conditions were established under which quantitative information on the dynamics can be retrieved from BACE measurements.
The applied experimental techniques allowed a sensitive analysis and quantification of geminate and non-geminate recombination losses, along with charge transport, in organic solar cells. A full analysis is demonstrated exemplarily for two prominent polymer:fullerene blends.
The model system P3HT:PCBM spin-cast from chloroform (as prepared) exhibits a poor power conversion efficiency (PCE) of the order of 0.5%, mainly caused by low fill factors (FF) and currents. It could be shown that the performance of these devices is limited by the hole transport and large bimolecular recombination (BMR) losses, while geminate recombination losses are insignificant. The low polymer crystallinity and the poor interconnection between the polymer and fullerene domains lead to a hole mobility of the order of 10^-7 cm^2/Vs, several orders of magnitude lower than the electron mobility in these devices. The concomitant build-up of space charge hinders the extraction of both electrons and holes and promotes bimolecular recombination losses.
Thermal annealing of P3HT:PCBM blends directly after spin coating improves the crystallinity and interconnection of the polymer and fullerene phases and results in comparatively high electron and hole mobilities of the order of 10^-3 cm^2/Vs and 10^-4 cm^2/Vs, respectively. In addition, a coarsening of the domain sizes leads to a reduction of the BMR by one order of magnitude. High charge carrier mobilities and low recombination losses result in a comparatively high FF (>65%) and short-circuit current (J_SC ≈ 10 mA/cm^2). The overall device performance (PCE ≈ 4%) is only limited by a rather low spectral overlap of the absorption with the solar emission and a small V_OC, given by the energetics of the P3HT.
From this point of view, the combination of the low-bandgap polymer PTB7 with PCBM is a promising approach. In BHJ solar cells, this polymer leads to a higher V_OC due to optimized energetics with PCBM. However, the J_SC in these (unoptimized) devices is similar to the J_SC in the optimized blend with P3HT, and the FF is rather low (≈ 50%). It turned out that the unoptimized PTB7:PCBM blends suffer from high BMR, a low electron mobility of the order of 10^-5 cm^2/Vs, and geminate recombination losses due to field-dependent charge carrier generation.
The use of the solvent additive DIO optimizes the blend morphology, mainly by suppressing the formation of very large fullerene domains and by forming a more uniform structure of well-interconnected donor and acceptor domains of the order of a few nanometers. Our analysis shows that this results in an increase of the electron mobility by about one order of magnitude (3 x 10^-4 cm^2/Vs), while BMR and geminate recombination losses are significantly reduced. Together, these effects improve the J_SC (≈ 17 mA/cm^2) and the FF (> 70%). In 2012, this polymer:fullerene combination resulted in a record PCE for a single-junction OSC of 9.2%.
Remarkably, the numerical device simulations revealed that the specific shape of the J-V characteristics depends very sensitively on the variation of not only one, but all dynamic parameters. On the one hand, this proves that the experimentally determined parameters, if leading to a good match between simulated and measured J-V curves, are realistic and reliable. On the other hand, it also emphasizes the importance of considering all involved dynamic quantities, namely charge carrier generation, geminate and non-geminate recombination, as well as electron and hole mobilities. The measurement or investigation of only a subset of these parameters, as frequently found in the literature, will lead to an incomplete picture and possibly to misleading conclusions.
Importantly, the comparison of the numerical device simulation employing the measured parameters with the experimental J-V characteristics allows the identification of loss channels and limitations of OSCs. For example, it turned out that inefficient extraction of charge carriers is a critical limiting factor that is often overlooked. However, efficient and fast transport of charges becomes more and more important with the development of new low-bandgap materials with very high internal quantum efficiencies. Likewise, due to moderate charge carrier mobilities, the active layer thicknesses of current high-performance devices are usually limited to around 100 nm. However, larger layer thicknesses would be more favourable with respect to higher current output and robustness of production. Newly designed donor materials should therefore ideally show a high tendency to form crystalline structures, as observed in P3HT, combined with the optimized energetics and quantum efficiency of, for example, PTB7.
Movements of processive cytoskeletal motors are characterized by an interplay between directed motion along filaments and diffusion in the surrounding solution. In the present work, these peculiar movements are studied by modeling them as random walks on a lattice. An additional subject of our studies is the effect of motor-motor interactions on these movements. In detail, four transport phenomena are studied: (i) random walks of single motors in compartments of various geometries, (ii) the stationary concentration profiles which build up as a result of these movements in closed compartments, (iii) boundary-induced phase transitions in open tube-like compartments coupled to reservoirs of motors, and (iv) the influence of cooperative effects in motor-filament binding on the movements. All these phenomena are experimentally accessible, and possible experimental realizations are discussed.
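The lattice picture can be sketched in its simplest form (a one-dimensional caricature with assumed rates, far simpler than the compartment geometries studied in the thesis): a bound motor steps forward once per unit time and unbinds with a small probability per step, so the mean run length along the filament is set by the unbinding probability.

```python
import numpy as np

rng = np.random.default_rng(7)

# 1D lattice caricature: while bound, the motor makes one forward step per
# time step; it unbinds with probability eps per step, ending the run.
eps = 0.01
runs = []
for _ in range(2000):
    x = 0
    while rng.random() > eps:   # stay bound with probability 1 - eps
        x += 1                  # directed step along the filament
    runs.append(x)

# mean run length of a geometric run: (1 - eps) / eps = 99 lattice sites
print(np.mean(runs))
```

Reinserting unbound motors as diffusing lattice walkers that can rebind is what produces the stationary concentration profiles and boundary-induced transitions listed above.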
In this work, a better understanding was achieved of the coupling between the troposphere and the stratosphere in the mid and polar latitudes of the Northern Hemisphere (NH) on monthly time scales, a coupling that can be attributed to the propagation of quasi-stationary waves. The focus was on the dynamically active winter months, which exhibit the largest variability. Tropospheric variability is largely determined by preferred circulation structures, the teleconnection patterns. Using a rotated EOF analysis of the geopotential height at 500 hPa, the most important regional tropospheric teleconnection patterns of the Northern Hemisphere were computed. They can be assigned to three large geographical regions: the North Atlantic-European region, Eurasia, and the Pacific-North American region. Since these are the strongest tropospheric variability patterns, they were used as fundamental tropospheric quantities to investigate dynamical relationships between the tropospheric and stratospheric circulation. By means of instantaneous and time-lagged correlation analyses of the tropospheric patterns with stratospheric variables, it was shown for the first time that different regional tropospheric teleconnection patterns have different effects on the stratospheric circulation. The Pacific-North American patterns show significant instantaneous correlations with quasi-barotropic pattern structures, whereas the North Atlantic-European patterns show zonally symmetric annular structures from 1978 onwards, with significant correlation values over tropical and subtropical latitudes and inverse correlation values over polar regions.
An investigation of the influence of stratospheric variability showed that the strongest coupling of the North Atlantic-European teleconnection patterns with the stratospheric circulation occurs when the polar vortex is displaced towards Europe, which explains the significant correlations from 1978 onwards. A zonally averaged and, above all, local investigation of the wave propagation conditions during this stratospheric situation shows that stratospheric wind speeds become weaker over North America and the western North Atlantic, thereby improving the propagation conditions for planetary waves in this geographical region. The stronger wave propagation leads to a stronger interaction with the polar jet, which is thereby decelerated. This deceleration strengthens the meridional residual circulation. In other words, when wave excitation over the North Atlantic and Europe is enhanced, the response of the residual circulation is particularly strong if the polar vortex is displaced towards Europe. The quasi-barotropic correlation structures found for the Pacific-North American patterns, with perturbation amplitudes decreasing with height, no westward tilt, and a negative refractive index over the Pacific, point to evanescent waves, which occur as solutions of the wave equation for a negative refractive index. This is caused by the polar jet, which over the Pacific is always displaced far to the north. Finally, this work investigated whether the relationships found between the North Atlantic-European teleconnection patterns and the stratospheric circulation can also be reproduced by an atmospheric model.
For this purpose, a transient 40-year climate run of the ECHAM4.L39(DLR)/CHEM model with forcings as realistic as possible was analyzed for the first time with respect to the coupling between the troposphere and the stratosphere. Both the tropospheric and the stratospheric variability patterns could be simulated by the model. However, the stratospheric patterns show phase shifts in the wavenumber-1 structures, and their time series exhibit no significant trend from 1978 onwards. The coupling of the North Atlantic-European teleconnection patterns with the stratospheric circulation shows a considerably weaker response of the meridional residual circulation. Thus, it turned out that in particular the stratospheric circulation in the model shows strong discrepancies with the observations, which in turn influence the wave propagation conditions. This makes it clear that the stratospheric circulation plays an important role for a correct reproduction of wave propagation and hence of the coupling between the troposphere and the stratosphere.
Stochastic information, to be understood as "information gained by the application of stochastic methods", is proposed as a tool in the assessment of changes in climate. This thesis aims at demonstrating that stochastic information can improve the consideration and reduction of uncertainty in the assessment of changes in climate. The thesis consists of three parts. In part one, an indicator is developed that allows the determination of the proximity to a critical threshold. In part two, the tolerable windows approach (TWA) is extended to a probabilistic TWA. In part three, an integrated assessment of changes in flooding probability due to climate change is conducted within the TWA. The thermohaline circulation (THC) is a circulation system in the North Atlantic, whose circulation may break down in a saddle-node bifurcation under the influence of climate change. Due to uncertainty in ocean models, it is currently very difficult to determine the distance of the THC from the bifurcation point. We propose a new indicator of the system's proximity to the bifurcation point, obtained by considering the THC as a stochastic system and using the information contained in the fluctuations of the circulation around its mean state. As the system is moved closer to the bifurcation point, the power spectrum of the overturning becomes "redder", i.e. more energy is contained in the low frequencies. Since these spectral changes are a generic property of the saddle-node bifurcation, the method is not limited to the THC but could also be applicable to other systems, e.g. transitions in ecosystems. In part two, a probabilistic extension of the tolerable windows approach (TWA) is developed. In the TWA, the aim is to determine the complete set of emission strategies that are compatible with so-called guardrails. Guardrails are limits to impacts of climate change or to climate change itself.
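The spectral reddening can be illustrated on a toy linearization (an assumption: near a saddle-node, fluctuations around the mean state behave like an Ornstein-Uhlenbeck process whose restoring rate shrinks towards zero): as the restoring rate decreases, low-frequency power grows, visible here as a larger lag-1 autocorrelation.

```python
import numpy as np

# Ornstein-Uhlenbeck caricature of fluctuations around the mean state:
# dx = -lam*x dt + noise. Smaller lam = closer to the bifurcation.
def simulate(lam, n=200_000, dt=0.01, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    noise = rng.normal(0.0, sigma * np.sqrt(dt), n - 1)
    for i in range(n - 1):
        x[i + 1] = x[i] - lam * x[i] * dt + noise[i]
    return x

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

far = simulate(lam=2.0)    # far from the bifurcation
near = simulate(lam=0.2)   # closer: critical slowing down, "redder" spectrum
print(lag1_autocorr(far), lag1_autocorr(near))
```

The same information is contained in the power spectrum S(f) ∝ 1/(lam^2 + (2πf)^2): lowering lam piles energy into the low frequencies, which is the generic fingerprint of an approaching saddle-node bifurcation exploited by the proposed indicator.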
Therefore, the TWA determines the "maneuvering space" humanity has if certain impacts of climate change are to be avoided. Due to uncertainty, it is not possible to definitely exclude the considered impacts of climate change; there will always be a certain probability of violating a guardrail. The TWA is therefore extended to a probabilistic TWA that is able to consider "probabilistic uncertainty", i.e. uncertainty that can be expressed as a probability distribution or that arises through natural variability. As a first application, temperature guardrails are imposed, and the dependence of emission reduction strategies on probability distributions for climate sensitivities is investigated. The analysis suggests that it will be difficult to comply with a temperature guardrail of 2°C with high probabilities of actually meeting the target. In part three, an integrated assessment of changes in flooding probability due to climate change is conducted. A simple hydrological model is presented, as well as a downscaling scheme that allows the reconstruction of the spatio-temporal natural variability of temperature and precipitation. These are used to determine a probabilistic climate impact response function (CIRF), a function that allows the assessment of changes in the probability of certain flood events under conditions of a changed climate. The assessment of changes in flooding probability is conducted in 83 major river basins. Not all floods can be considered: events that either happen very fast or affect only a very small area cannot be considered, but large-scale flooding due to strong, longer-lasting precipitation events can. Finally, the probabilistic CIRFs obtained are used to determine emission corridors, where the guardrail is a limit to the fraction of the world population that is affected by a predefined shift in the probability of the 50-year flood event. This latter analysis has two main results.
The uncertainty about regional changes in climate is still very high, and even small amounts of further climate change may lead to large changes in flooding probability in some river systems.
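The core idea of the spectral indicator can be sketched with a minimal numerical example: near a saddle-node bifurcation, fluctuations around the mean state behave approximately like an Ornstein-Uhlenbeck process whose relaxation rate lambda vanishes at the bifurcation, so the power spectrum "reddens" as lambda decreases. The following sketch uses illustrative parameters, not the actual THC model of the thesis:

```python
import numpy as np

def ou_series(lam, sigma=1.0, dt=0.01, n=200_000, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process,
    the linearization of the fluctuations around the stable fixed point."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    noise = rng.normal(0.0, np.sqrt(dt), n)
    for i in range(1, n):
        x[i] = x[i - 1] - lam * x[i - 1] * dt + sigma * noise[i]
    return x

def low_freq_fraction(x, dt=0.01, f_cut=0.05):
    """Fraction of spectral power below f_cut (simple periodogram estimate)."""
    freqs = np.fft.rfftfreq(len(x), d=dt)
    power = np.abs(np.fft.rfft(x - x.mean())) ** 2
    band = (freqs > 0) & (freqs < f_cut)
    return power[band].sum() / power[freqs > 0].sum()

far = low_freq_fraction(ou_series(lam=1.0))    # far from the bifurcation
near = low_freq_fraction(ou_series(lam=0.1))   # close to the bifurcation
# the spectrum near the bifurcation is "redder": near > far
```

Monitoring such a low-frequency power fraction over time gives a proxy for the distance to the saddle-node bifurcation, without requiring an accurate mean-state model.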
In the context of cosmological structure formation, sheets, filaments and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe do not reside in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20 % can be found in this state. The remainder (about 50 to 70 % of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, and constitute the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on the star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filament structures. Density and temperature profiles, and velocity fields, are expected to leave their special imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes that regulate the amount of gas transported by the streams.
This includes the time evolution of filaments, as well as possible quenching mechanisms. In this context, the halo mass range in which cold stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution, and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating due to a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations, parametrized by the wavelength of the initial perturbation L. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating due to a UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations into three dimensions, instead of a pancake structure, we obtain a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h filaments form, which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core. Thus, our results confirm a multiphase structure, which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be attributed to the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process.
Furthermore, the cross section of the streams is constricted by the outwards-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass higher than M_halo = 10^13 M_sun, and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not sufficiently strong for thermal conduction to be effective.
In the course of this thesis, gold nanoparticle/polyelectrolyte multilayer structures were prepared, characterized, and investigated with respect to their static and ultrafast optical properties. Using the dip-coating or spin-coating layer-by-layer deposition method, gold-nanoparticle layers were embedded in a polyelectrolyte environment with high structural perfection. Typical structures exhibit four repetition units, each consisting of one gold-particle layer and ten double layers of polyelectrolyte (cationic+anionic polyelectrolyte). The structures were characterized by X-ray reflectivity measurements, which reveal Bragg peaks up to the seventh order, evidencing the high stratification of the particle layers. In the same measurements pronounced Kiessig fringes were observed, which indicate a low global roughness of the samples. Atomic force microscopy (AFM) images verified this low roughness, which results from the high smoothing capability of polyelectrolyte layers. This smoothing effect facilitates the fabrication of stratified nanoparticle/polyelectrolyte multilayer structures, as nicely illustrated by a transmission electron microscopy image. The samples' optical properties were investigated by static spectroscopic measurements in the visible and UV range. The measurements revealed a frequency shift of the reflectance and of the plasmon absorption band, depending on the thickness of the polyelectrolyte layers that cover a nanoparticle layer. When the covering layer becomes thicker than the particle interaction range, the absorption spectrum becomes independent of the polymer thickness. However, the reflectance spectrum continues shifting to lower frequencies (even for large thicknesses). The range of plasmon interaction was determined to be on the order of the particle diameter for 10 nm, 20 nm, and 150 nm particles.
The transient broadband complex dielectric function of a multilayer structure was determined experimentally by ultrafast pump-probe spectroscopy. This was achieved by simultaneous measurements of the changes in the reflectance and transmittance of the excited sample over a broad spectral range. The changes in the real and imaginary parts of the dielectric function were directly deduced from the measured data by using a recursive formalism based on the Fresnel equations. This method can be applied to a broad range of nanoparticle systems where experimental data on the transient dielectric response are rare. This complete experimental approach serves as a test ground for modeling the dielectric function of a nanoparticle compound structure upon laser excitation.
Magnetotactic bacteria possess an intracellular structure called the magnetosome chain. Magnetosome chains contain nanoparticles of iron crystals enclosed by a membrane and aligned on a cytoskeletal filament. Due to the presence of the magnetosome chains, magnetotactic bacteria are able to orient and swim along magnetic field lines. A detailed study of the structural properties of magnetosome chains in magnetotactic bacteria is of primary scientific interest, as it can provide more insight into the formation of the cytoskeleton in bacteria. In this thesis, we develop a new framework to study the structural properties of magnetosome chains in magnetotactic bacteria.
First, we address the bending stiffness of magnetosome chains resulting from two main contributions: the magnetic interactions of the magnetosome particles and the bending stiffness of the cytoskeletal filament to which the magnetosomes are anchored. Our analysis indicates that, without stabilization by the cytoskeleton, the linear configuration of magnetosome particles may close into ring-like structures with no net magnetic moment, which thus cannot serve as a compass in cellular navigation. We therefore conclude that one role of the filament is to stabilize the linear configuration against ring closure.
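The ring-closure argument can be illustrated by summing pairwise dipole-dipole energies for a straight chain and a closed ring with the same nearest-neighbour spacing: the ring closes the magnetic flux and removes the chain's end penalty, so it is magnetically favourable. The moment and spacing below are hypothetical magnetosome-like values, not fitted parameters from the thesis:

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # mu_0 / (4 pi) in SI units

def dipolar_energy(positions, moments):
    """Total magnetic dipole-dipole interaction energy over all pairs [J]."""
    E = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = positions[j] - positions[i]
            r = np.linalg.norm(r_vec)
            r_hat = r_vec / r
            mi, mj = moments[i], moments[j]
            E += MU0_OVER_4PI / r**3 * (mi @ mj - 3.0 * (mi @ r_hat) * (mj @ r_hat))
    return E

def chain(n, d, m):
    """Straight chain of n head-to-tail dipoles with spacing d."""
    pos = np.array([[k * d, 0.0, 0.0] for k in range(n)])
    mom = np.tile([m, 0.0, 0.0], (n, 1))
    return pos, mom

def ring(n, d, m):
    """Closed ring with the same nearest-neighbour spacing d (chord length)."""
    R = d / (2.0 * np.sin(np.pi / n))
    th = 2.0 * np.pi * np.arange(n) / n
    pos = np.column_stack([R * np.cos(th), R * np.sin(th), np.zeros(n)])
    mom = m * np.column_stack([-np.sin(th), np.cos(th), np.zeros(n)])  # tangential
    return pos, mom

n, d, m = 20, 60e-9, 1e-16   # hypothetical: 20 particles, 60 nm spacing
E_chain = dipolar_energy(*chain(n, d, m))
E_ring = dipolar_energy(*ring(n, d, m))   # lower (more negative) than E_chain
```

Since the closed ring carries no net moment, a purely magnetic chain would be useless for magnetotaxis, which is consistent with the filament's stabilizing role argued above.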
We then investigate the equilibrium configurations of magnetosome particles, including linear-chain and closed-ring structures. Notably, we observe that the formation of a stable linear structure on the cytoskeletal filament requires a binding energy. In the presence of external stimuli, the stability of the magnetosome chain is determined by the internal dipole-dipole interactions, the stiffness, and the binding energy of the protein structure connecting the magnetosome particles to the filament. Our observations during and after treatment of the magnetosome chain with an external magnetic field substantiate the stabilization of magnetosome chains on the cytoskeletal filament by proteinaceous linkers, as well as the dynamic nature of these structures.
Finally, we employ our model to study the FMR spectra of magnetosome chains in a single cell of magnetotactic bacteria. We explore the effect of magnetocrystalline anisotropy on the three-fold symmetry observed in FMR spectra and the peculiarities of the spectra arising from different mutants of these bacteria.
Arctic climate change is marked by intensified warming compared to global trends and a significant reduction in Arctic sea ice which can intricately influence mid-latitude atmospheric circulation through tropo- and stratospheric pathways. Achieving accurate simulations of current and future climate demands a realistic representation of Arctic climate processes in numerical climate models, which remains challenging.
Model deficiencies in replicating observed Arctic climate processes often arise due to inadequacies in representing turbulent boundary layer interactions that determine the interactions between the atmosphere, sea ice, and ocean. Many current climate models rely on parameterizations developed for mid-latitude conditions to handle Arctic turbulent boundary layer processes.
This thesis focuses on modified representation of the Arctic atmospheric processes and understanding their resulting impact on large-scale mid-latitude atmospheric circulation within climate models. The improved turbulence parameterizations, recently developed based on Arctic measurements, were implemented in the global atmospheric circulation model ECHAM6. This involved modifying the stability functions over sea ice and ocean for stable stratification and changing the roughness length over sea ice for all stratification conditions. Comprehensive analyses are conducted to assess the impacts of these modifications on ECHAM6's simulations of the Arctic boundary layer, overall atmospheric circulation, and the dynamical pathways between the Arctic and mid-latitudes.
Through a step-wise implementation of the mentioned parameterizations into ECHAM6, a series of sensitivity experiments revealed that the combined impacts of the reduced roughness length and the modified stability functions are non-linear. Nevertheless, it is evident that both modifications consistently lead to a general decrease in the heat transfer coefficient, in close agreement with the observations.
Additionally, compared to the reference observations, the ECHAM6 model falls short in accurately representing unstable and strongly stable conditions.
The less frequent occurrence of strong stability restricts the influence of the modified stability functions by reducing the affected sample size. However, when focusing solely on the specific instances of a strongly stable atmosphere, the sensible heat flux approaches near-zero values, which is in line with the observations. Models employing commonly used surface turbulence parameterizations were shown to have difficulties replicating the near-zero sensible heat flux in strongly stable stratification.
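The role of the stability function can be sketched with a generic bulk formula in which stable stratification attenuates the neutral transfer coefficient, driving the sensible heat flux towards zero. The Louis-type damping function and all parameter values below are illustrative assumptions, not ECHAM6's actual scheme:

```python
import math

RHO, CP, KAPPA = 1.3, 1005.0, 0.4   # air density [kg/m^3], c_p [J/kg/K], von Karman

def transfer_coefficient(z, z0, Ri, b=9.4):
    """Neutral bulk heat-transfer coefficient reduced by a hypothetical
    Louis-type stability function f(Ri) for stable stratification (Ri > 0)."""
    c_n = (KAPPA / math.log(z / z0)) ** 2          # neutral coefficient
    f = 1.0 / (1.0 + b * Ri) ** 2 if Ri > 0 else 1.0
    return c_n * f

def sensible_heat_flux(u, t_air, t_sfc, z=10.0, z0=1e-3, Ri=0.0):
    """Bulk sensible heat flux [W/m^2]; positive when the air is warmer
    than the surface (downward flux)."""
    ch = transfer_coefficient(z, z0, Ri)
    return RHO * CP * ch * u * (t_air - t_sfc)

# stronger stability -> flux approaches zero at fixed wind and temperatures
neutral = sensible_heat_flux(5.0, -10.0, -12.0, Ri=0.0)
weak = sensible_heat_flux(5.0, -10.0, -12.0, Ri=0.05)
strong = sensible_heat_flux(5.0, -10.0, -12.0, Ri=1.0)
```

A scheme whose f(Ri) decays too slowly with Ri keeps the flux unrealistically large in strongly stable conditions, which is the model deficiency described above.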
I also found that these limited changes in surface layer turbulence parameterizations have a statistically significant impact on the temperature and wind patterns across multiple pressure levels, including the stratosphere, in both the Arctic and mid-latitudes. These significant signals vary in strength, extent, and direction depending on the specific month or year, indicating a strong reliance on the background state.
Furthermore, this research investigates how the modified surface turbulence parameterizations may influence the response of both stratospheric and tropospheric circulation to Arctic sea ice loss.
The most suitable parameterizations for accurately representing Arctic boundary layer turbulence were identified from the sensitivity experiments. Subsequently, the model's response to sea ice loss is evaluated through extended ECHAM6 simulations with different prescribed sea ice conditions.
The simulation with adjusted surface turbulence parameterizations better reproduced the vertical extent of the observed Arctic tropospheric warming, demonstrating improved alignment with the reanalysis data. Additionally, unlike the control experiments, this simulation successfully reproduced specific circulation patterns linked to the stratospheric pathway for Arctic-mid-latitude linkages. Specifically, an increased occurrence of the Scandinavian-Ural blocking regime (negative phase of the North Atlantic Oscillation) in early (late) winter is observed. Overall, it can be inferred that improving surface-layer turbulence parameterizations can improve ECHAM6's response to sea ice loss.
The increasing number of known exoplanets raises questions about their demographics and the mechanisms that shape planets into how we observe them today. Young planets in close-in orbits are exposed to harsh environments due to the host star being magnetically highly active, which results in high X-ray and extreme UV fluxes impinging on the planet. Prolonged exposure to this intense photoionizing radiation can cause planetary atmospheres to heat up, expand and escape into space via a hydrodynamic escape process known as photoevaporation. For super-Earth and sub-Neptune-type planets, this can even lead to the complete erosion of their primordial gaseous atmospheres. A factor of interest for this particular mass-loss process is the activity evolution of the host star. Stellar rotation, which drives the dynamo and with it the magnetic activity of a star, changes significantly over the stellar lifetime. This strongly affects the amount of high-energy radiation received by a planet as stars age. At a young age, planets still host warm and extended envelopes, making them particularly susceptible to atmospheric evaporation. Especially in the first gigayear, when X-ray and UV levels can be 100 - 10,000 times higher than for the present-day sun, the characteristics of the host star and the detailed evolution of its high-energy emission are of importance.
In this thesis, I study the impact of stellar activity evolution on the high-energy-induced atmospheric mass loss of young exoplanets. The PLATYPOS code was developed as part of this thesis to calculate photoevaporative mass-loss rates over time. The code, which couples parameterized planetary mass-radius relations with an analytical hydrodynamic escape model, was used, together with Chandra and eROSITA X-ray observations, to investigate the future mass loss of the two young multiplanet systems V1298 Tau and K2-198. Further, in a numerical ensemble study, the effect of a realistic spread of activity tracks on the small-planet radius gap was investigated for the first time. The works in this thesis show that for individual systems, in particular if planetary masses are unconstrained, the difference between a young host star following a low-activity track vs. a high-activity one can have major implications: the exact shape of the activity evolution can determine whether a planet can hold on to some of its atmosphere, or completely loses its envelope, leaving only the bare rocky core behind. For an ensemble of simulated planets, an observationally-motivated distribution of activity tracks does not substantially change the final radius distribution at ages of several gigayears. My simulations indicate that the overall shape and slope of the resulting small-planet radius gap is not significantly affected by the spread in stellar activity tracks. However, it can account for a certain scattering or fuzziness observed in and around the radius gap of the observed exoplanet population.
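A commonly used starting point for such mass-loss calculations is the energy-limited approximation, in which a fraction of the absorbed XUV power lifts gas out of the planet's gravitational well. The sketch below is a generic version of that formula, not a reimplementation of PLATYPOS; the heating efficiency is an assumed value and the Roche-lobe correction factor is set to one:

```python
import math

G = 6.674e-11                        # gravitational constant [m^3 kg^-1 s^-2]
M_EARTH, R_EARTH = 5.972e24, 6.371e6  # Earth mass [kg] and radius [m]

def mass_loss_rate(f_xuv, m_p, r_p, eps=0.1):
    """Energy-limited photoevaporative mass-loss rate [kg/s].

    f_xuv : XUV flux received by the planet [W/m^2]
    m_p, r_p : planetary mass [kg] and radius [m]
    eps : heating efficiency (assumed); Roche-lobe factor omitted (= 1)
    """
    return eps * math.pi * f_xuv * r_p**3 / (G * m_p)

# hypothetical young sub-Neptune: 5 Earth masses, 3 Earth radii
mdot = mass_loss_rate(100.0, 5 * M_EARTH, 3 * R_EARTH)
```

The linear dependence on f_xuv is what makes the host star's activity track so decisive: integrating this rate over a low-activity versus a high-activity XUV history yields very different cumulative envelope losses.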
Most of the matter in the universe consists of hydrogen. The hydrogen in the intergalactic medium (IGM), the matter between the galaxies, underwent a change of its ionisation state at the epoch of reionisation, at a redshift of roughly 6 < z < 10, a few hundred million years after the Big Bang. At this time, the mostly neutral hydrogen in the IGM was ionised, but the source of the responsible hydrogen-ionising emission remains unclear. In this thesis I discuss the most likely candidates for the emission of this ionising radiation, a type of galaxy called Lyman alpha emitters (LAEs). As implied by their name, they emit Lyman alpha radiation, produced after a hydrogen atom has been ionised and recombines with a free electron. The ionising radiation itself (also called Lyman continuum emission), which is needed for this process inside the LAEs, could also be responsible for ionising the IGM around those galaxies at the epoch of reionisation, given that enough Lyman continuum escapes. Through this mechanism, Lyman alpha and Lyman continuum radiation are closely linked, and both are studied to better understand the properties of high-redshift galaxies and the reionisation state of the universe.
Before I can analyse their Lyman alpha emission lines and the escape of Lyman continuum emission from them, the first step is the detection and correct classification of LAEs in integral field spectroscopic data, specifically taken with the Multi-Unit Spectroscopic Explorer (MUSE). After detecting emission line objects in the MUSE data, the task of classifying them and determining their redshift is performed with the graphical user interface QtClassify, which I developed during the work on this thesis. It uses the strength of the combination of spectroscopic and photometric information that integral field spectroscopy offers to enable the user to quickly identify the nature of the detected emission lines. The reliable classification of LAEs and determination of their redshifts is a crucial first step towards an analysis of their properties.
Through radiative transfer processes, the properties of the neutral hydrogen clouds in and around LAEs are imprinted on the shape of the Lyman alpha line. Thus, after identifying the LAEs in the MUSE data, I analyse the properties of the Lyman alpha emission line, such as the equivalent width (EW) distribution, the asymmetry and width of the line, as well as the double-peak fraction. I challenge the common method of displaying EW distributions as histograms without taking the limits of the survey into account and construct a survey-independent EW distribution function that better reflects the properties of the underlying population of galaxies. I illustrate this by comparing the fraction of high-EW objects between the two surveys MUSE-Wide and MUSE-Deep, both consisting of MUSE pointings (each with the size of one square arcminute) of different depths. In the 60 MUSE-Wide fields of one hour exposure time I find a fraction of objects with extreme EWs above EW_0 > 240 A of ~20%, while in the MUSE-Deep fields (9 fields with an exposure time of 10 hours and one with an exposure time of 31 hours) I find a fraction of only ~1%, which is due to the differences in the limiting line flux of the surveys. The highest EW I measure is EW_0 = 600.63 +- 110 A, which hints at an unusual underlying stellar population, possibly with a very low metallicity.
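The survey-limit effect follows directly from the definition of the equivalent width: at a fixed continuum level, a survey's limiting line flux sets a floor on the detectable rest-frame EW, so naive histograms from surveys of different depths are not comparable. A minimal sketch with hypothetical fluxes:

```python
def rest_frame_ew(f_line, f_cont, z):
    """Rest-frame equivalent width [Angstrom].
    f_line: line flux [erg/s/cm^2]; f_cont: continuum flux density near the
    line [erg/s/cm^2/A], both observed-frame; z: redshift."""
    return f_line / f_cont / (1.0 + z)

def min_detectable_ew(f_line_limit, f_cont, z):
    """Smallest EW_0 a survey with line-flux limit f_line_limit can detect
    for a source of given continuum level; objects below this threshold are
    simply missing from a naive histogram."""
    return rest_frame_ew(f_line_limit, f_cont, z)

# hypothetical source at z = 3
ew0 = rest_frame_ew(1e-17, 1e-20, 3.0)            # -> 250 Angstrom
floor_shallow = min_detectable_ew(1e-17, 1e-20, 3.0)
floor_deep = min_detectable_ew(1e-18, 1e-20, 3.0)  # 10x deeper -> 10x lower floor
```

This is why a distribution function that folds in the limiting line flux, rather than a raw histogram, is needed before comparing MUSE-Wide and MUSE-Deep EW statistics.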
With the knowledge of the redshifts and positions of the LAEs detected in the MUSE-Wide survey, I also look for Lyman continuum emission coming from these galaxies and analyse the connection between Lyman continuum emission and Lyman alpha emission. I use ancillary Hubble Space Telescope (HST) broadband photometry in the bands that contain the Lyman continuum and find six Lyman continuum leaker candidates. To test whether the Lyman continuum emission of LAEs is coming only from those individual objects or the whole population, I select LAEs that are most promising for the detection of Lyman continuum emission, based on their rest-frame UV continuum and Lyman alpha line shape properties. After this selection, I stack the broadband data of the resulting sample and detect a signal in Lyman continuum with a significance of S/N = 5.5, pointing towards a Lyman continuum escape fraction of ~80%. If the signal is reliable, it strongly favours LAEs as the providers of the hydrogen ionising emission at the epoch of reionisation and beyond.
This thesis focuses on the physics of neutron stars and their description with methods of numerical relativity. In a first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, like the entropy in critical processes, should provide deeper understanding of thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This of course brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In previous works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to the one of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level.
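Schematically, restricting to the radial direction and writing $U$ for the vector of conserved variables, $F$ for the radial flux and $S$ for the sources, the two formulations read (a sketch of the structure only, not the full general-relativistic system):

```latex
% standard formulation: expanded operator, geometric 1/r term in the source
\partial_t U + \partial_r F(U) = S(U) - \frac{F(U)}{r}

% new formulation: conserved variables and fluxes rescaled by r
\partial_t\,(r\,U) + \partial_r\bigl(r\,F(U)\bigr) = r\,S(U)
```

Both forms are equivalent analytically; numerically, the new formulation keeps the left-hand side in strict flux-conservative shape and avoids the explicit $1/r$ source, which is where the difference in truncation error originates.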
Our tests show that the new formulation yields results with a global truncation error which is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two possible final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena concerning the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, velocity and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considering the head-on collision of Gaussian distributions of matter.
Also in this case they found the appearance of type-I critical behaviour, but they additionally performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and, in particular, is not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup which is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with earlier results, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations in a way similar to the one studied in the context of scalar-field critical collapse.
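The type-I scaling relation referred to above can be written compactly; it relates the survival time $\tau$ of the near-critical configuration to the distance from the threshold (a sketch of the standard relation, with $\lambda$ the growth rate of the single unstable mode of the critical solution, not the thesis' fitted values):

```latex
\tau(P) \;\simeq\; -\frac{1}{\lambda}\,\ln\lvert P - P_{\star}\rvert \;+\; \mathrm{const}
```

The fine structure then appears as a periodic modulation superposed on this logarithmic law, analogous to the periodic wiggles found in scalar-field critical collapse.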
Advancing charge selective contacts for efficient monolithic perovskite-silicon tandem solar cells
(2019)
Hybrid organic-inorganic perovskites are one of the most promising material classes for photovoltaic energy conversion. In solar cells, the perovskite absorber is sandwiched between n- and p-type contact layers which selectively transport electrons and holes to the cell's cathode and anode, respectively. This thesis aims to advance contact layers in perovskite solar cells and unravel the impact of interface and contact properties on the device performance. Further, the contact materials are applied in monolithic perovskite-silicon heterojunction (SHJ) tandem solar cells, which can overcome the single-junction efficiency limit and attract increasing attention. Therefore, all contact layers must be highly transparent to foster light harvesting in the tandem solar cell design. In addition, the SHJ device restricts processing temperatures for the selective contacts to below 200°C.
A comparative study of various electron selective contact materials, all processed below 180°C, in n-i-p type perovskite solar cells highlights that selective contacts and their interfaces to the absorber govern the overall device performance. Combining fullerenes and metal-oxides in a TiO2/PC60BM (phenyl-C60-butyric acid methyl ester) double-layer contact allows to merge good charge extraction with minimized interface recombination. The layer sequence thereby achieved high stabilized solar cell performances up to 18.0% and negligible current-voltage hysteresis, an otherwise pronounced phenomenon in this device design. Double-layer structures are therefore emphasized as a general concept to establish efficient and highly selective contacts.
Based on this success, the concept to combine desired properties of different materials is transferred to the p-type contact. Here, a mixture of the small molecule Spiro-OMeTAD [2,2’,7,7’-tetrakis(N,N-di-p-methoxyphenylamine)-9,9’-spirobifluoren] and the doped polymer PEDOT [poly(3,4-ethylenedioxythiophene)] is presented as a novel hole selective contact. PEDOT thereby remarkably suppresses charge recombination at the perovskite surface, allowing an increase of quasi-Fermi level splitting in the absorber. Further, the addition of Spiro-OMeTAD into the PEDOT layer is shown to enhance charge extraction at the interface and allow high efficiencies up to 16.8%.
Finally, the knowledge on contact properties is applied to monolithic perovskite-SHJ tandem solar cells. The main goal is to optimize the top contact stack of doped Spiro-OMeTAD/molybdenum oxide (MoOx)/ITO towards higher transparency by two different routes. First, fine-tuning of the ITO deposition mitigates chemical reduction of MoOx and increases the transmittance of MoOx/ITO stacks by 25%. Second, Spiro-OMeTAD is replaced with the alternative hole transport materials PEDOT/Spiro-OMeTAD mixtures, CuSCN, or PTAA [poly(triaryl amine)]. Experimental results determine layer thickness constraints and validate optical simulations, which subsequently allow to realistically estimate the respective tandem device performances. As a result, PTAA represents the most promising replacement for Spiro-OMeTAD, with a projected increase of the optimum tandem device efficiency for the architecture used herein by 2.9% relative, to 26.5% absolute. The results also reveal general guidelines for further performance gains of the technology.
In this thesis we provide a construction of the operator framework starting from the functional formulation of group field theory (GFT). We define operator algebras on Hilbert spaces whose expectation values in specific states provide correlation functions of the functional formulation. Our construction allows us to give a direct relation between the ingredients of the functional GFT and its operator formulation in a perturbative regime. Using this construction we provide an example of GFT states that cannot be formulated as states in a Fock space and lead to mathematically inequivalent representations of the operator algebra. We show that such inequivalent representations can be grouped together by their symmetry properties and sometimes break the left translation symmetry of the GFT action. We interpret these groups of inequivalent representations as phases of GFT, similar to the classification of phases that we use in QFTs on space-time.
The Arctic is warming faster than the rest of the Earth. Among other effects, this manifests itself in an amplified warming of the Arctic boundary layer. This work addresses interactions between synoptic cyclones and the Arctic atmosphere on local to supra-regional scales. Its starting point are measurement data and model simulations for the period of the N-ICE2015 expedition, which took place from early January to the end of June 2015 in the Arctic North Atlantic sector.
Radiosonde measurements show the effects of synoptic cyclones most clearly in winter, since the advection of warm, moist air masses into the Arctic switches the state of the atmosphere from radiatively clear to radiatively opaque. Although this sharp contrast exists only in winter, the analysis shows that integrated water vapour is a suitable indicator for the advection of air masses from low latitudes into the Arctic in spring as well. In addition to air-mass advection, the influence of the cyclones on static stability is characterized. Comparing the N-ICE2015 observations with the SHEBA campaign (1997/1998), which took place over thicker ice, reveals similarities in the static stability of the atmosphere despite the different sea-ice regimes. The observed differences in stability can be traced back to differences in synoptic activity. This suggests that, on seasonal time scales, the thinner ice cover has only a minor influence on the thermodynamic structure of the Arctic troposphere as long as a thick snow layer covers it. A further comparison with the radiosondes launched at the AWIPEV station in Ny-Ålesund, Svalbard, in parallel with the N-ICE2015 campaign makes clear that, on seasonal time scales, the synoptic cyclones determine the weather above the orography.
Des Weiteren werden für Februar 2015 die Auswirkungen von in der Vertikalen variiertem Nudging auf die Entwicklung der Zyklonen am Beispiel des hydrostatischen regionalen Klimamodells HIRHAM5 untersucht. Es zeigt sich, dass die Unterschiede zwischen den acht Modellsimulationen mit abnehmender Anzahl der genudgten Level zunehmen. Die größten Differenzen resultieren vornehmlich aus dem zeitlichen Versatz der Entwicklung synoptischer Zyklonen. Zur Korrektur des Zeitversatzes der Zykloneninitiierung genügt es bereits, Nudging in den unterstem 250 m der Troposphäre anzuwenden. Daneben findet sich zwischen den genudgten HIRHAM5-Simulation und den in situ Messungen der gleiche positive Temperaturbias, den auch ERA-Interim besitzt. Das freie HIRHAM hingegen reproduziert das positive Ende der N-ICE2015 Temperaturverteilung gut, besitzt aber einen starken negativen Bias, der sehr wahrscheinlich aus einer Unterschätzung des Feuchtegehalts resultiert. An Beispiel einer Zyklone wird gezeigt, dass Nudging Einfluss auf die Lage der Höhentiefs besitzt, die ihrerseits die Zyklonenentwicklung am Boden beeinflussen. Im Weiteren wird mittels eines für kleine Ensemblegrößen geeigneten Varianzmaßes eine statistische Einschätzung der Wirkung des Nudgings auf die Vertikale getroffen. Es wird festgestellt, dass die Ähnlichkeit der Modellsimulationen in der unteren Troposphäre generell höher ist als darüber und in 500 hPa ein lokales Minimum besitzt.
Im letzten Teil der Analyse wird die Wechselwirkung der oberen und unteren Stratosphäre anhand zuvor betrachteter Zyklonen mit Daten der ERA-Interim Reanalyse untersucht. Lage und Ausrichtung des Polarwirbels erzeugten ab Anfang Februar 2015 eine ungewöhnlich große Meridionalkomponente des Tropopausenjets, die Zugbahnen in die zentrale Arktis begünstigte. Am Beispiel einer Zyklone wird die Übereinstimmung der synoptischen Entwicklung mit den theoretischen Annahmen über den abwärts gerichteten Einfluss der Stratosphäre auf die Troposphäre hervorgehoben. Dabei spielt die nicht-lineare Wechselwirkung zwischen der Orographie Grönlands, einer Intrusion stratosphärischer Luft in die Troposphäre sowie einer in Richtung Arktis propagierender Rossby-Welle eine tragende Rolle. Als Indikator dieser Wechselwirkung werden horizontale Signaturen aus abwechselnd aufsteigender und absinkender Luft innerhalb der Troposphäre identifiziert.
Membrane adhesion is a fundamental biological process in which membranes are attached to neighboring membranes or surfaces. Membrane adhesion emerges from a complex interplay between the binding of membrane-anchored receptors/ligands and the membrane properties. In this work, we study membrane adhesion mediated by lipid-anchored saccharides using microsecond-long full-atomistic molecular dynamics simulations. Motivated by neutron scattering experiments on membrane adhesion via lipid-anchored saccharides, we investigate the role of LeX, Lac1, and Lac2 saccharides and membrane fluctuations in membrane adhesion.
We study the binding of saccharides in three different systems: for saccharides in water, for saccharides anchored to essentially planar membranes at fixed separations, and for saccharides anchored to apposing fluctuating membranes. Our simulations of two saccharides in water indicate that the saccharides engage in weak interactions to form dimers. We find that the binding occurs in a continuum of bound states instead of a certain number of well-defined bound structures, a behavior we term "diffuse binding".
The binding of saccharides anchored to essentially planar membranes strongly depends on the separation of the membranes, which is fixed in our simulation system. We show that the binding constants for trans-interactions of two lipid-anchored saccharides monotonically decrease with increasing separation. Saccharides anchored to the same membrane leaflet engage in cis-interactions with binding constants comparable to the trans-binding constants at the smallest membrane separations. The interplay of cis- and trans-binding can be investigated in simulation systems with many lipid-anchored saccharides. For Lac2, our simulation results indicate a positive cooperativity of trans- and cis-binding. In this cooperative binding, the trans-binding constant is enhanced by the cis-interactions. For LeX, in contrast, we observe no cooperativity between trans- and cis-binding. In addition, we determine the forces generated by trans-binding of lipid-anchored saccharides in planar membranes from the binding-induced deviations of the lipid anchors. We find that the forces acting on trans-bound saccharides increase with increasing membrane separation, up to values of the order of 10 pN.
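As an illustration of how a binding constant of the kind discussed above can be estimated from simulation data, the following sketch computes a two-dimensional binding constant from the fraction of frames in which a single receptor/ligand pair is bound. The bound-state time series, the bound fraction, and the membrane area are invented stand-ins for an actual MD contact analysis; the relation K2D = A · p_b / (1 − p_b) holds for one pair confined to a membrane area A.

```python
import numpy as np

# Hypothetical bound/unbound time series: 1 if the saccharide pair is
# trans-bound in a frame, 0 otherwise (in the thesis this would come from
# an MD contact analysis; here it is synthetic with an assumed 20% bound
# fraction).
rng = np.random.default_rng(1)
bound = rng.random(50_000) < 0.2

p_b = bound.mean()
box_area = 36.0   # assumed membrane area available to the pair, in nm^2

# For a single receptor/ligand pair confined to area A, the 2D binding
# constant follows from K2D = A * p_b / (1 - p_b).
K2d = box_area * p_b / (1.0 - p_b)
print(f"bound fraction {p_b:.3f}, K2D = {K2d:.2f} nm^2")
```

In the same spirit, repeating such an estimate at several fixed membrane separations would trace out the monotonic decrease of the trans-binding constant described above.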
The binding of saccharides anchored to the fluctuating membranes results from an interplay between the binding properties of the lipid-anchored saccharides and membrane fluctuations. Our simulations, which have the same average separation of the membranes as obtained from the neutron scattering experiments, yield a binding constant larger than in planar membranes with the same separation. This result demonstrates that membrane fluctuations play an important role at average membrane separations which are seemingly too large for effective binding. We further show that the probability distribution of the local separation can be well approximated by a Gaussian distribution. We calculate the relative membrane roughness and show that our results are in good agreement with the roughness values reported from the neutron scattering experiments.
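A minimal sketch of the roughness analysis described above, assuming (as the simulations indicate) Gaussian-distributed local separations. The mean separation and roughness values below are invented, not taken from the thesis or the neutron scattering experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical local membrane separations l(x, y) sampled over the
# simulation trajectory (in nm); the thesis finds their distribution to be
# well approximated by a Gaussian.
mean_sep = 2.0    # assumed average separation (nm)
roughness = 0.5   # assumed relative roughness, i.e. std of local separation (nm)
local_sep = rng.normal(mean_sep, roughness, size=100_000)

# Relative roughness xi = sqrt(<(l - <l>)^2>)
xi = np.sqrt(np.mean((local_sep - local_sep.mean()) ** 2))
print(f"estimated relative roughness: {xi:.3f} nm")
```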
In the western hemisphere, the piano is one of the most important instruments. While its evolution spans more than three centuries and the most important physical aspects have already been investigated, some aspects of the characterization of the piano remain poorly understood. Even for the pivotal piano soundboard, the effect that ribs mounted on the board exert on sound radiation and propagation in particular is mostly neglected in the literature. The present investigation deals precisely with the sound-wave propagation effects that emerge in the presence of an array of equally spaced ribs mounted on a soundboard. Solid-state theory predicts particular eigenmodes and eigenfrequencies for such arrangements, comparable to those of single units in a crystal. Following this 'linear chain model' (LCM), differences in the frequency spectrum are observable as a distinct band structure. The amplitudes of the modes also change, owing to differences in the damping factor. These scattering effects were investigated not only for a well-understood conceptual rectangular soundboard (multichord) but also for a genuine piano resonance board manufactured by the piano maker 'C. Bechstein Pianofortefabrik'. To be able to distinguish between the characterizing spectra with and without mounted ribs, the typical assembly plan for the Bechstein instrument was specially customized. Spectral similarities and differences between the two boards are found in terms of damping and tone. Furthermore, specially prepared minimally invasive piezoelectric polymer sensors made from polyvinylidene fluoride (PVDF) were used to record solid-state vibrations of the investigated system. The essential calibration and characterization of these polymer sensors was performed by determining the electromechanical conversion, which is represented by the piezoelectric coefficient.
To this end, the robust 'sinusoidally varying external force' method was applied, in which a dynamic force perpendicular to the sensor's surface generates mobile charge carriers. Crucial parameters were monitored, with the frequency response function being the most important one for acousticians. Together with conventional condenser microphones, the sound was measured both as solid-state vibration and as an airborne wave. On this basis, statements can be made about the emergence, propagation, and overall radiation of the generated modes of the vibrating system. Ultimately, these results acoustically characterize the entire system.
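The 'linear chain model' invoked above predicts a phonon-like band of eigenfrequencies for a periodic array of coupled units. A minimal sketch, assuming a monatomic chain of identical segments with coupling stiffness K, segment mass m and spacing a (all values invented, not soundboard parameters):

```python
import numpy as np

# Monatomic linear chain: the dispersion relation is
#   omega(k) = sqrt(4K/m) * |sin(k a / 2)|,
# i.e. a band of eigenfrequencies with a sharp upper band edge at the
# Brillouin-zone boundary k = pi/a. K, m and a are illustrative assumptions.
K = 1.0e4   # effective coupling stiffness (N/m), assumption
m = 0.05    # effective segment mass (kg), assumption
a = 0.1     # segment spacing (m), assumption

k = np.linspace(0, np.pi / a, 200)               # first Brillouin zone
omega = np.sqrt(4 * K / m) * np.abs(np.sin(k * a / 2))

print(f"band edge: {omega.max() / (2 * np.pi):.1f} Hz")
```

The sharp band edge of this dispersion relation is what makes the band structure observable as a distinct feature in the measured frequency spectrum.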
The main goal of this cumulative thesis is the derivation of surface emissivity data in the infrared from radiance measurements of Venus. Since these data are diagnostic of the chemical composition and grain size of the surface material, they can help to improve knowledge of the planet's geology. Spectrally resolved images of nightside emissions in the range 1.0-5.1 μm were recently acquired by the InfraRed Mapping channel of the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS-M-IR) aboard ESA's Venus Express (VEX). Surface and deep atmospheric thermal emissions in this spectral range are strongly obscured by the extremely opaque atmosphere, but three narrow spectral windows at 1.02, 1.10, and 1.18 μm allow sounding of the surface. Additional windows between 1.3 and 2.6 μm provide the information on atmospheric parameters that is required to interpret the surface signals. Quantitative data on surface and atmosphere can be retrieved from the measured spectra by comparing them to simulated spectra. A numerical radiative transfer model is used in this work to simulate the observable radiation as a function of atmospheric, surface, and instrumental parameters. It is a line-by-line model that takes into account thermal emission by the surface and atmosphere as well as absorption and multiple scattering by gases and clouds. The VIRTIS-M-IR measurements are first preprocessed to obtain an optimal data basis for the subsequent steps. In this process, a detailed detector responsivity analysis enables the optimization of the data consistency. The measurement data have a relatively low spectral information content, and different parameter vectors can describe the same measured spectrum equally well. A usual method to regularize the retrieval of the wanted parameters from a measured spectrum is to take into account a priori mean values and standard deviations of the parameters to be retrieved. This decreases the probability of obtaining unreasonable parameter values.
The multi-spectrum retrieval algorithm MSR is developed to additionally consider physically realistic spatial and temporal a priori correlations between retrieval parameters describing different measurements. Neglecting geologic activity, MSR also allows the retrieval of an emissivity map as a parameter vector that is common to several spectrally resolved images covering the same surface target. Even when applying MSR, it is difficult to obtain reliable emissivity maps in absolute values. A detailed retrieval error analysis based on synthetic spectra reveals that this is mainly due to interferences from parameters that cannot be derived from the spectra themselves but have to be set to assumed values to enable the radiative transfer simulations. The MSR retrieval of emissivity maps relative to a fixed emissivity is shown to effectively avoid most emissivity retrieval errors. Relative emissivity maps at 1.02, 1.10, and 1.18 μm are finally derived from many VIRTIS-M-IR measurements that cover a surface target at Themis Regio. They are interpreted as spatial variations relative to an assumed emissivity mean of the target. It is verified that the maps are largely independent of the choice of many interfering parameters as well as of the utilized measurement data set. These are the first Venus IR emissivity data maps based on a consistent application of a full radiative transfer simulation and a retrieval algorithm that respects a priori information. The maps are sufficiently reliable for future geologic interpretations.
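The a priori regularization described above can be illustrated with a toy linear retrieval: minimizing a data misfit plus a prior misfit weighted by the a priori standard deviations (a maximum a posteriori estimate). The forward model, noise level and prior values below are invented; the real retrieval of course uses the full radiative transfer model rather than a linear operator.

```python
import numpy as np

# Toy a-priori-regularized retrieval: minimize
#   ||y - F x||^2 / sigma_y^2 + ||x - x_a||^2 / sigma_a^2.
# F is deliberately nearly degenerate, mimicking parameter vectors that
# describe the same spectrum equally well; all numbers are invented.
F = np.array([[1.0, 0.9], [1.0, 1.1]])   # toy forward model
y = np.array([2.0, 2.1])                  # "measured" spectrum
sigma_y, sigma_a = 0.05, 0.5              # noise and a priori std
x_a = np.array([1.0, 1.0])                # a priori parameter means

# Normal equations of the regularized least-squares problem
A = F.T @ F / sigma_y**2 + np.eye(2) / sigma_a**2
b = F.T @ y / sigma_y**2 + x_a / sigma_a**2
x_hat = np.linalg.solve(A, b)
print("retrieved parameters:", x_hat)
```

Without the prior term, the near-degeneracy of F lets the two parameters trade off almost freely; the a priori term pulls the solution toward physically reasonable values, which is exactly the role of the a priori means and standard deviations in the retrieval.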
The primary objective of this work was to develop a laser source for fundamental investigations in the field of laser-materials interactions. In particular, it is intended to facilitate the study of the influence of the temporal energy distribution, such as the interaction between adjacent pulses, on ablation processes. The aim was therefore to design a laser with a highly flexible and easily controllable temporal energy distribution. The laser meeting these demands is an SBS laser with optional active mode-locking. The nonlinear reflectivity of the SBS mirror leads to passive Q-switching and emits ns-pulse bursts with µs spacing. The pulse train parameters such as pulse duration, pulse spacing, pulse energy and number of pulses within a burst can be individually adjusted by tuning the pump parameters and the starting conditions of the laser. Another feature of the SBS reflection is phase conjugation, which leads to an excellent beam quality thanks to the compensation of phase distortions. Transverse fundamental mode operation and a beam quality better than 1.4 times the diffraction limit can be maintained for average output powers of up to 10 W. In addition to the dynamics on the ns timescale described above, a defined splitting of each ns pulse into a train of ps pulses can be achieved by additional active mode-locking. This twofold temporal focusing of the intensity leads to single pulse energies of up to 2 mJ at pulse durations of approximately 400 ps, which corresponds to a pulse peak power of 5 MW. While the pulse duration is of the same order of magnitude as that of other passively Q-switched lasers with simultaneous mode-locking, the pulse energy and pulse peak power exceed the values of such systems found in the literature by an order of magnitude. To the best of my knowledge, the laser presented here is the first implementation of a self-starting mode-locked SBS laser oscillator.
In order to gain a better understanding and control of the transient output of the laser, two complementary numerical models were developed. The first is based on laser rate equations which are solved for each laser mode individually, while the mode-locking dynamics are calculated from the resulting transient spectrum. The rate equations consider the mean photon densities in the resonator; therefore the propagation of the light inside the resonator is not properly represented. The second model, in contrast, introduces a spatial resolution of the resonator, and hence the propagation inside the resonator can be treated more accurately. Consequently, a mismatch between the loss modulation frequency and the resonator round-trip time can be captured. This model calculates all dynamics in the time domain, and therefore spectral influences such as the Stokes shift have to be neglected. Both models achieve an excellent reproduction of the ns dynamics that are generated by the SBS Q-switch. Separately, each model fails to reproduce all aspects of the ps dynamics of the SBS laser in detail. This can be attributed to the complexity of the numerous physical processes involved in this system. Thanks to their complementary nature, however, they provide a very useful tool for investigating the various influences on the dynamics of the mode-locked SBS laser individually. These aspects can eventually be recomposed to give a complete picture of the mechanisms which govern the output dynamics. Among the aspects under scrutiny were, in particular, the initial resonator quality, which determines the starting condition for the SBS Q-switch, the modulation depth of the AOM, and the phonon lifetime as well as the Brillouin frequency of the SBS medium.
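To give a flavor of the rate-equation approach of the first model, the following is a strongly simplified single-mode sketch in dimensionless form (time in units of the cavity lifetime). It reproduces Q-switch-like intensity spikes from a weak seed, but it is purely illustrative: the thesis's model additionally resolves individual longitudinal modes and the nonlinear SBS mirror, and all parameter values below are assumptions.

```python
import numpy as np

# Minimal single-mode class-B laser rate equations:
#   dphi/dt = phi * (n - 1)            (gain minus cavity loss)
#   dn/dt   = gamma * (p - n*(1+phi))  (pumping minus saturation/decay)
# phi: photon density, n: inversion, gamma: ratio of cavity to upper-state
# lifetime, p: pump parameter. Integrated with a simple Euler scheme.
def simulate(p=2.0, gamma=1e-3, dt=1e-2, steps=400_000):
    phi, n = 1e-8, 0.0          # weak photon seed, empty inversion
    trace = np.empty(steps)
    for i in range(steps):
        dphi = phi * (n - 1.0)
        dn = gamma * (p - n * (1.0 + phi))
        phi += dphi * dt
        n += dn * dt
        trace[i] = phi
    return trace

trace = simulate()
print(f"seed {trace[0]:.1e} -> peak {trace.max():.1f} (Q-switch-like spike)")
```

The slow build-up of inversion followed by a fast photon spike is the same qualitative mechanism by which the SBS mirror's nonlinear reflectivity produces the ns-pulse bursts described above.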
The numerical simulations and the experiments have opened several doors inviting further investigation and promising potential for further improvement of the experimental results. The results of the simulations, in combination with the experimental results which determined the starting conditions for the simulations, leave no doubt that the bandwidth generation can primarily be attributed to the SBS Stokes shift during the buildup of the Q-switch pulse. In each resonator round trip, bandwidth is generated by shifting a part of the circulating light in frequency. The magnitude of the frequency shift corresponds to the Brillouin frequency, which is a constant of the SBS material and amounts to 240 MHz in the case of SF6. The modulation of the AOM merely provides an exchange of population between spectrally adjacent modes and therefore diminishes a modulation in the spectrum. By using a material with a Brillouin frequency in the GHz range, the bandwidth generation can be considerably accelerated, thereby shortening the pulse duration. It was also demonstrated that yet another nonlinear effect of the SBS can be exploited: if the phonon lifetime is short compared to the resonator round-trip time, one obtains a modulation of the SBS reflectivity that supports the modulation of the AOM. The application of external optical feedback by a conventional mirror turns out to be an alternative to the AOM for synchronizing the longitudinal resonator modes. The interesting feature of this system is that, although highly complex in its physical processes and temporal output dynamics, it is very simple and inexpensive from a technical point of view. No expensive modulators and no control electronics are necessary. Finally, the numerical models constitute a powerful tool for the investigation of the emission dynamics of complex laser systems on arbitrary timescales and can also display the spectral evolution of the laser output.
In particular, it could be demonstrated that the differences between the results of the complementary models vanish for systems of lesser complexity.
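The bandwidth argument above admits a quick back-of-the-envelope check: each round trip shifts part of the circulating light by the Brillouin frequency, so after N round trips the accumulated bandwidth is roughly N · f_B, and the achievable pulse duration scales as ~1/bandwidth. The round-trip count and the GHz-range comparison material are assumed illustrative values.

```python
# Back-of-the-envelope estimate of bandwidth generation via the Stokes
# shift. f_B_SF6 is the 240 MHz Brillouin shift of SF6 quoted in the text;
# the GHz-range medium and the number of round trips are assumptions.
f_B_SF6 = 240e6        # Brillouin shift of SF6 (Hz)
f_B_GHz = 2.4e9        # assumed GHz-range SBS medium, for comparison
n_roundtrips = 10      # assumed round trips during Q-switch pulse buildup

for name, f_B in [("SF6", f_B_SF6), ("GHz medium", f_B_GHz)]:
    bw = n_roundtrips * f_B            # accumulated bandwidth ~ N * f_B
    tau = 1.0 / bw                     # transform-limit-style duration
    print(f"{name}: bandwidth ~ {bw / 1e9:.1f} GHz, pulse ~ {tau * 1e12:.0f} ps")
```

With these assumed numbers the SF6 case lands near the ~400 ps pulse durations reported above, and a ten-times larger Brillouin shift would shorten the pulses by the same factor, which is the scaling the text argues for.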
Crowded field spectroscopy and the search for intermediate-mass black holes in globular clusters
(2013)
Globular clusters are dense and massive star clusters that are an integral part of any major galaxy. Careful studies of their stars (a single cluster may contain several million of them) have revealed that the ages of many globular clusters are comparable to the age of the Universe. These remarkable ages make them valuable probes for the exploration of structure formation in the early universe or the assembly of our own galaxy, the Milky Way. A topic of current research relates to the question whether globular clusters harbour massive black holes in their centres. These black holes would bridge the gap from stellar-mass black holes, which represent the final stage in the evolution of massive stars, to the supermassive ones that reside in the centres of galaxies. For this reason, they are referred to as intermediate-mass black holes. The most reliable method to detect and to weigh a black hole is to study the motion of stars inside its sphere of influence. The measurement of Doppler shifts via spectroscopy allows one to carry out such dynamical studies. However, spectroscopic observations in dense stellar fields such as Galactic globular clusters are challenging. As a consequence of diffraction processes in the atmosphere and the finite resolution of a telescope, observed stars have a finite width characterized by the point spread function (PSF); hence they appear blended in crowded stellar fields. Classical spectroscopy does not preserve any spatial information, so it is impossible to separate the spectra of blended stars and to measure their velocities. Yet methods have been developed to perform imaging spectroscopy. One of these methods is integral field spectroscopy. In the course of this work, the first systematic study of the potential of integral field spectroscopy in the analysis of dense stellar fields is carried out.
To this aim, a method is developed to reconstruct the PSF from the observed data and to use this information to extract the stellar spectra. Based on dedicated simulations, predictions are made about the number of stellar spectra that can be extracted from a given data set and the quality of those spectra. Furthermore, the influence of uncertainties in the recovered PSF on the extracted spectra is quantified. The results clearly show that, compared to traditional approaches, this method makes a significantly larger number of stars accessible to a spectroscopic analysis. This systematic study goes hand in hand with the development of a software package to automate the individual steps of the data analysis. It is applied to data of three Galactic globular clusters, M3, M13, and M92. The data have been observed with the PMAS integral field spectrograph at the Calar Alto observatory with the aim to constrain the presence of intermediate-mass black holes in the centres of the clusters. The application of the new analysis method yields samples of about 80 stars per cluster. These are by far the largest spectroscopic samples that have so far been obtained in the centre of any of the three clusters. In the course of the further analysis, Jeans models are calculated for each cluster that predict the velocity dispersion based on an assumed mass distribution inside the cluster. The comparison to the observed velocities of the stars shows that in none of the three clusters is a massive black hole required to explain the observed kinematics. Instead, the observations rule out any black hole in M13 with a mass higher than 13000 solar masses at the 99.7% level. For the other two clusters, this limit lies at significantly lower masses, namely 2500 solar masses in M3 and 2000 solar masses in M92. In M92, it is possible to lower this limit even further by a combined analysis of the extracted stars and the unresolved stellar component.
This component consists of the numerous stars in the cluster that appear unresolved in the integral field data. The final limit of 1300 solar masses is the lowest limit obtained so far for a massive globular cluster.
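The dynamical constraints above rest on measuring a velocity dispersion from individual stellar radial velocities that carry heterogeneous measurement errors. A hedged sketch of the standard maximum-likelihood estimator for the intrinsic dispersion, run on synthetic velocities (the sample size of 80 mirrors the extracted samples mentioned above; all other numbers are invented, not the thesis data):

```python
import numpy as np

# Synthetic stand-in data: 80 radial velocities drawn around the cluster
# mean, each broadened by its own measurement error. true_sigma and the
# error range are assumptions for illustration.
rng = np.random.default_rng(42)
true_sigma = 7.0                          # km/s, assumed intrinsic dispersion
errors = rng.uniform(1.0, 3.0, 80)        # per-star uncertainties (km/s)
v = rng.normal(0.0, np.sqrt(true_sigma**2 + errors**2))

# Maximize the Gaussian likelihood over the intrinsic dispersion sigma:
# each star contributes a Gaussian with variance sigma^2 + err_i^2.
sigmas = np.linspace(0.1, 20.0, 2000)
var = sigmas[:, None] ** 2 + errors[None, :] ** 2
lnL = -0.5 * np.sum(np.log(2 * np.pi * var) + v**2 / var, axis=1)
sigma_ml = sigmas[np.argmax(lnL)]
print(f"ML intrinsic dispersion: {sigma_ml:.2f} km/s")
```

Comparing such a measured dispersion profile with the dispersion predicted by Jeans models with and without a central point mass is what yields the black hole mass limits quoted above.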
Organic thin film transistors (TFTs) are an attractive option for low-cost electronic applications and may be used for active matrix displays and for RFID applications. To extend the range of applications, there is a need to develop and optimise the performance of non-volatile memory devices that are compatible with the solution-processing fabrication procedures used in plastic electronics. A possible candidate is an organic TFT incorporating the ferroelectric co-polymer poly(vinylidenefluoride-trifluoroethylene) (P(VDF-TrFE)) as the gate insulator. Dielectric measurements have been carried out on all-organic metal-insulator-semiconductor (MIS) structures with P(VDF-TrFE) as the gate insulator. The capacitance spectra of the MIS devices were measured under different biases, showing the effect of charge accumulation and depletion on the Maxwell-Wagner peak. The position and height of this peak clearly indicate the lack of stable depletion behavior and the decrease of mobility with increasing depletion zone width, i.e. upon moving into the P3HT bulk. The lack of stable depletion was further investigated with capacitance-voltage (C-V) measurements. When the structure was driven into depletion, C-V plots showed a positive flat-band voltage shift, arising from the change in polarization state of the ferroelectric insulator. When biased into accumulation, the polarization was reversed. It is shown that the two polarization states are stable, i.e. no depolarization occurs below the coercive field. However, negative charge trapped at the semiconductor-insulator interface during the depletion cycle masks the negative shift in flat-band voltage expected during the sweep to accumulation voltages. The measured output characteristics of the studied ferroelectric field-effect transistors confirmed the results of the C-V plots.
Furthermore, the results indicated a trapping of electrons at the positively charged surfaces of the ferroelectrically polarized P(VDF-TrFE) crystallites near the insulator/semiconductor interface during the first poling cycles. The study of the MIS structure by means of thermally stimulated current (TSC) measurements revealed further evidence for the stability of the polarization under depletion voltages. It was shown that the lack of stable depletion behavior is caused by the compensation of the orientational polarization by fixed electrons at the interface, and not by the depolarization of the insulator, as proposed in several publications. The above results suggest that the performance of non-volatile memory devices can be improved by optimizing the interface.
Exploiting the optical anisotropy of thin films is of great importance for display technology, optical data storage, and optical security elements in particular. This doctoral thesis deals with theoretical and experimental investigations of three-dimensional anisotropy, and especially with light-induced three-dimensional anisotropy in thin organic polymer films. The insights gained and the methods developed can make valuable contributions to optimization processes, such as compensating the viewing-angle dependence of liquid-crystal displays. The new method of immersion transmission ellipsometry (ITE) for investigating thinner films was developed within this dissertation. In combination with conventional reflection and transmission ellipsometry, this method makes it possible to determine the absolute three-dimensional refractive indices of a biaxial film. For the first time, it was thus possible to determine the three-dimensional refractive-index ellipsoid of transparent, thinner (150 nm) films with high accuracy (to three decimal places). The ITE method consequently has the potential to be profitably applied to even thinner films. The light-induced generation of three-dimensional anisotropy was investigated in thin films of azobenzene- and cinnamic-acid-containing amorphous and liquid-crystalline homo- and copolymers. For the first time, quantitative investigations were carried out on the change of light-induced three-dimensional anisotropies in thin films of azobenzene- and cinnamic-acid-containing polymers upon annealing above the glass-transition temperature. For many of the investigated polymers, the three-dimensional order after irradiation with polarized light and subsequent annealing above the glass-transition temperature appeared to depend on the film thickness.
The cause most likely lies in the planar initial orientation of the spin-coated thinner films, detected with the newly developed ITE method. To determine the profile of tilt gradients in thicker polymer films, a special method based on waveguide-mode spectroscopy was developed. Maximally inducible birefringences in liquid-crystalline polymers, determined quantum-chemically, were compared with the experimentally found orders.
Active Galactic Nuclei (AGN) are considered to be the main powering source of active galaxies, in which central Super Massive Black Holes (SMBHs) with masses between 10⁶ and 10⁹ M⊙ gravitationally accrete the surrounding material. The AGN phenomenon extends over a very wide range of luminosities, from the most luminous high-redshift quasars (QSOs) to the local Low-Luminosity AGN (LLAGN) with significantly weaker luminosities. While "typical" luminous AGN distinguish themselves by their characteristic blue featureless continuum, by Broad Emission Lines (BELs) with Full Widths at Half Maximum (FWHM) of the order of a few thousand km s⁻¹ arising from the so-called Broad Line Region (BLR), and by strong radio and/or X-ray emission, the detection of LLAGN, on the other hand, is quite challenging due to their extremely weak emission lines and the absence of the power-law continuum. In order to fully understand AGN evolution and their duty cycles across cosmic history, we need proper knowledge of the AGN phenomenon at all luminosities and redshifts, as well as perspectives from different wavelength bands.
In this thesis I present a search for AGN signatures in central spectra of 542 local (0.005 < z < 0.03) galaxies from the Calar Alto Legacy Integral Field Area (CALIFA) survey. The adopted aperture of 3′′ × 3′′ corresponds to the central ∼ 100−500 pc for the redshift range of CALIFA. Using the standard emission-line ratio diagnostic diagrams, we initially classified all CALIFA emission-line galaxies (526) into star-forming, LINER-like, Seyfert 2 and intermediate classes. We further detected signatures of a broad Hα component in 89 spectra from the sample, of which more than 60% are present in the central spectra of LINER-like galaxies. These BELs are very weak, with luminosities in the range 10³⁸−10⁴¹ erg s⁻¹, but with FWHMs between 1000 km s⁻¹ and 6000 km s⁻¹, comparable to those of luminous high-z AGN. This result implies that type 1 AGN are in fact quite frequent in the local Universe. We also identified an additional 29 Seyfert 2 galaxies using the emission-line ratio diagnostic diagrams.
Using the MBH − σ∗ correlation, we estimated black hole masses of 55 type 1 AGN from CALIFA, a sample for which we had estimates of the bulge stellar velocity dispersion σ∗. We compared these masses to the ones that we estimated from the virial method and found large discrepancies. We analyzed the validity of both methods for black hole mass estimation of local LLAGN, and concluded that most likely virial scaling relations can no longer be applied as a valid MBH estimator in such a low-luminosity regime. These black holes accrete at very low rates, having Eddington ratios in the range 4.1 × 10⁻⁵ − 2.4 × 10⁻³. The detection of BELs with such low luminosities and at such low Eddington rates implies that these LLAGN are still able to form the BLR, although probably with a modified structure of the central engine.
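The two quantities above can be sketched in a few lines: an M-σ scaling of the form log10(MBH/M⊙) = α + β·log10(σ∗/200 km s⁻¹), and the Eddington ratio λ = L_bol/L_Edd. The coefficients α ≈ 8.1 and β ≈ 4.2 are typical literature-style values used here as assumptions, not the calibration adopted in the thesis.

```python
import numpy as np

# M-sigma scaling: log10(M_BH / M_sun) = alpha + beta * log10(sigma / 200 km/s).
# alpha and beta are assumed, illustrative coefficients.
def mbh_from_sigma(sigma_kms, alpha=8.1, beta=4.2):
    return 10 ** (alpha + beta * np.log10(sigma_kms / 200.0))

# Eddington ratio lambda = L_bol / L_Edd, with L_Edd ~ 1.26e38 erg/s
# per solar mass of black hole.
def eddington_ratio(L_bol_erg_s, M_bh_msun):
    return L_bol_erg_s / (1.26e38 * M_bh_msun)

M = mbh_from_sigma(100.0)          # sigma* = 100 km/s, an assumed example
lam = eddington_ratio(1e41, M)     # assumed bolometric luminosity in erg/s
print(f"M_BH ~ {M:.2e} M_sun, lambda ~ {lam:.1e}")
```

With these assumed inputs the example lands at λ of order 10⁻⁴, inside the 4.1 × 10⁻⁵ − 2.4 × 10⁻³ range quoted above for the CALIFA type 1 AGN.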
In order to obtain a full picture of black hole growth across cosmic time, it is essential that we study black holes in different stages of their activity. For that purpose, we estimated the broad AGN Luminosity Function (AGNLF) of our entire type 1 AGN sample using the 1/Vmax method. The shape of the AGNLF indicates an apparent flattening below luminosities LHα ∼ 10³⁹ erg s⁻¹. Correspondingly, we estimated the active Black Hole Mass Function (BHMF) and Eddington Ratio Distribution Function (ERDF) for a sub-sample of type 1 AGN for which we have MBH and λ estimates. The flattening is also present in both the BHMF and the ERDF, around log(MBH) ∼ 7.7 and log(λ) < −3, respectively. We estimated the fraction of active SMBHs in CALIFA by comparing our active BHMF to that of the local quiescent SMBHs. The shape of
the active fraction, which decreases with increasing MBH, as well as the flattening of the AGNLF, BHMF and ERDF, is consistent with the scenario of AGN cosmic downsizing.
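The 1/Vmax estimator used above weights each object by the inverse of the maximum volume within which it would still pass the survey's flux limit. A minimal sketch with invented luminosities and a Euclidean volume (no cosmological corrections), purely to show the mechanics of the estimator:

```python
import numpy as np

# Invented toy sample: five AGN luminosities and a flux limit. For each
# object, V_max is the (Euclidean) volume out to the distance at which its
# flux would drop to the survey limit; each object then contributes
# 1/V_max to its luminosity bin.
L = np.array([1e39, 5e39, 2e40, 8e40, 3e41])   # erg/s, invented
f_lim = 1e-14                                   # erg/s/cm^2, invented flux limit
cm_per_mpc = 3.086e24

d_max = np.sqrt(L / (4 * np.pi * f_lim)) / cm_per_mpc   # Mpc
v_max = 4.0 / 3.0 * np.pi * d_max**3                    # Mpc^3

bins = np.array([38.5, 39.5, 40.5, 41.5])               # log10(L) bin edges
phi, _ = np.histogram(np.log10(L), bins=bins, weights=1.0 / v_max)
phi /= np.diff(bins)                                    # per dex
print("Phi(L) [Mpc^-3 dex^-1]:", phi)
```

Because luminous objects are visible within much larger volumes, their 1/Vmax weights are small; this is how the estimator corrects for the fact that a flux-limited survey over-represents bright sources.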
To complete the AGN census in the CALIFA galaxy sample, it is necessary to search for AGN in various wavelength bands. For this purpose we performed cross-correlations between all 542 CALIFA galaxies and multiwavelength surveys: the Swift-BAT 105-month catalogue (in the hard 15−195 keV X-ray band) and the NRAO VLA Sky Survey (NVSS, in the 1.4 GHz radio domain). This added 1 new AGN candidate in the X-ray band and 7 in the radio band to our local LLAGN count.
It is possible to detect AGN emission signatures within 10−20 kpc outside of the central galactic regions. This may happen when the central AGN has recently switched off and the photoionized material is spread across the galaxy within the light-travel time, or when the photoionized material is blown away from the nucleus by outflows. In order to detect these extended AGN regions, we constructed spatially resolved emission-line ratio diagnostic diagrams of all emission-line galaxies from CALIFA, and found 1 new object that was not previously identified as an AGN.
Obtaining the complete AGN census in CALIFA, with five different AGN types, showed that LLAGN make up a significant fraction, 24%, of the emission-line galaxies in the CALIFA sample. This result implies that AGN are quite common in the local Universe and, although in a very low activity stage, they account for a large fraction of all local SMBHs. Within this thesis we approached the upper limit of the AGN fraction in the local Universe and gained a deeper understanding of the LLAGN phenomenon.
This thesis is focused on a better understanding of the formation mechanism of bulk birefringence gratings (BBG) and surface relief gratings (SRG) in photosensitive polymer films. A new set-up is developed that enables the in situ investigation of how the polymer film is structured during irradiation with modulated light. The novel aspect of the equipment is that it combines several techniques, namely a diffraction efficiency (DE) set-up, an atomic force microscope (AFM), and an optical set-up for controlled illumination of the sample. This enables the simultaneous acquisition and differentiation of both gratings (BBG and SRG) while changing the irradiation conditions in a desired way.
The dissertation is based on five publications. The first publication (I) focuses on the description of the set-up and the interpretation of the measured data. A fine structure within the 1st-order diffraction spot is observed, which results from the inhomogeneity of the inscribed gratings.
In the second publication (II) the interplay of BBG and SRG in the DE is discussed. It was found that, depending on the polarization of a weak probe beam, the diffraction components of the SRG and BBG interfere either constructively or destructively in the DE, altering the appearance of the intensity distribution within the diffracted spot.
The third (III) and fourth (IV) publications describe the light-induced reconfiguration of surface structures. Special attention is paid to the conditions influencing the erasure of topography and bulk gratings, which can be achieved via thermal treatment or illumination of the polymer film. By translating the interference pattern (IP) in a controlled way, the optical erasure speed is significantly increased. Additionally, a dynamically reconfigurable surface is generated, which can move surface-attached objects through continuous translation of the interference pattern during irradiation of the polymer films.
The fifth publication (V) deals with the understanding of polymer deformation under irradiation with the SP-IP, which is the only IP that generates a half-period topography grating (compared to the period of the IP) on the photosensitive polymer film. This mechanism can be used, e.g., to generate an SRG below the diffraction limit of light. It also represents an easy way of changing the period of the surface grating by a small change in the polarization angle of the interfering beams, without adjusting the optical path of the two beams. Additionally, complex surface gratings formed in mixed polarization and intensity interference patterns are shown.
I J. Jelken, C. Henkel and S. Santer, Applied Physics B, 125 (2019), 218
II J. Jelken, C. Henkel and S. Santer, Appl. Phys. Lett., 116 (2020), 051601
III J. Jelken and S. Santer, RSC Advances, 9 (2019), 20295
IV J. Jelken, M. Brinkjans, C. Henkel and S. Santer, SPIE Proceedings, 11367 (2020), 1136710
V J. Jelken, C. Henkel and S. Santer, Formation of Half-Period Surface Relief Gratings in Azobenzene Containing Polymer Films (submitted to Applied Physics B)
A huge number of applications require coherent radiation in the visible spectral range. Since diode lasers are very compact and efficient light sources, there is great interest in covering these applications with diode laser emission. Despite modern band-gap engineering, not all wavelengths can be accessed with diode laser radiation; in particular, in the visible spectral range between 480 nm and 630 nm no diode laser emission is available yet. Nonlinear frequency conversion of near-infrared radiation is a common way to generate coherent emission in the visible spectral range. However, radiation of extraordinary spatial, temporal and spectral quality is required to pump frequency conversion. Broad area (BA) diode lasers are reliable high-power light sources in the near-infrared spectral range. They are among the most efficient coherent light sources, with electro-optical efficiencies of more than 70%. Standard BA lasers are not suitable as pump lasers for frequency conversion because of their poor beam quality and spectral properties. For this purpose, tapered lasers and diode lasers with Bragg gratings are utilized instead. However, these diode laser structures require additional manufacturing and assembly steps, which makes their processing challenging and expensive. An alternative to BA diode lasers is the stripe-array architecture. The emitting area of a stripe-array diode laser is comparable to that of a BA device, and the manufacturing of these arrays requires only one additional process step. Such a stripe-array consists of several narrow stripe emitters fabricated in close proximity. Due to the overlap of the fields of neighboring emitters, or the presence of leaky waves, strong coupling between the emitters exists. As a consequence, the emission of such an array is characterized by a so-called supermode. However, in the free-running stripe-array, mode competition between several supermodes occurs because of the lack of wavelength stabilization.
This leads to power fluctuations, spectral instabilities and poor beam quality. Thus, it was necessary to study the emission properties of these stripe-arrays in order to find new concepts for an external synchronization of the emitters. The aim was to achieve stable longitudinal and transversal single-mode operation at high output powers, giving a brightness sufficient for efficient nonlinear frequency conversion. For this purpose, a comprehensive analysis of the stripe-array devices was carried out. The physical effects that give rise to the emission characteristics were investigated theoretically and experimentally. In this context, numerical models could be verified and extended, and a good agreement between simulation and experiment was observed. One way to stabilize a specific supermode of an array is to operate it in an external cavity. Based on numerical simulations and experimental work, it was possible to design novel external cavities that select a specific supermode and stabilize all emitters of the array at the same wavelength. This resulted in stable emission with 1 W output power, a narrow bandwidth in the range of 2 MHz and a very good beam quality with M² < 1.5, a new level of brightness and brilliance compared to other BA and stripe-array diode laser systems. The emission from this external cavity diode laser (ECDL) satisfied the requirements for nonlinear frequency conversion, and a substantial improvement over existing concepts was made. In the next step, newly available periodically poled crystals were used for second harmonic generation (SHG) in single-pass setups. With the stripe-array ECDL as pump source, more than 140 mW of coherent radiation at 488 nm could be generated with a very high opto-optical conversion efficiency. The generated blue light had very good transversal and longitudinal properties and could be used to generate biphotons by parametric down-conversion.
This was feasible because of the improvement made with the infrared stripe-array diode lasers due to the development of new physical concepts.
Transport processes in and of cells are of major importance for the survival of the organism. Muscles have to be able to contract, chromosomes have to be moved to opposing ends of the cell during mitosis, and organelles, which are compartments enclosed by membranes, have to be transported along molecular tracks. Molecular motors are proteins whose main task is moving other molecules. For that purpose they transform the chemical energy released in the hydrolysis of ATP into mechanical work. The motors of the cytoskeleton belong to the three superfamilies myosin, kinesin and dynein. Their tracks are filaments of the cytoskeleton, namely actin filaments and microtubules. Here, we examine stochastic models used for describing the movements of these linear molecular motors. The scale of the movements ranges from single steps of a motor protein up to the directed walk along a filament. A single step bridges around 10 nm, depending on the protein, and takes about 10 ms if enough ATP is available. Our models comprise M states or conformations the motor can attain during its movement along a one-dimensional track. At K locations along the track, transitions between the states are possible. The velocity of the protein as a function of the transition rates between the single states can be determined analytically. We calculate this velocity for systems of up to four states and locations and derive a number of rules that are helpful in estimating the behaviour of an arbitrary given system. Beyond that, we look at decoupled subsystems, i.e., one or several states that have no connection to the remaining system. With a certain probability a motor undergoes one cycle of conformational changes, with another probability an independent other cycle. Active elements in real transport processes driven by molecular motors will not be limited to the transitions between the states.
In distorted networks, or starting from the discrete master equation of the system, it is possible to specify horizontal rates as well, which furthermore no longer have to fulfill the conditions of detailed balance. In doing so, we obtain unique, complete paths through the respective network and rules for the dependence of the total current on all rates of the system. In addition, we examine the time evolution for given initial distributions. In enzymatic reactions there is the notion of a main pathway that these reactions follow preferentially; we determine optimal paths and the maximal flow for given networks. In order to specify the dependence of the motor's velocity on its fuel ATP, we consider possible reaction kinetics determining the connection between unbalanced transition rates and ATP concentration. Depending on the type of reaction kinetics and the number of unbalanced rates, we obtain qualitatively different curves connecting the velocity to the ATP concentration. The molecular interaction potentials the motor experiences on its way along the track are unknown. We compare different simple potentials and the effects that the localization of the vertical rates in the network model has on the transport coefficients, in comparison to other models.
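For a single-cycle network of this kind, the steady-state velocity follows from the stationary solution of the master equation: solve W p = 0 for the state occupations p, then multiply the net cycle current by the distance advanced per cycle. The sketch below uses hypothetical rates and step length, not the models analyzed in the thesis:

```python
import numpy as np

def stationary_velocity(wf, wb, d):
    """Steady-state velocity of a cyclic M-state motor model.

    wf[i] is the forward rate i -> i+1 (mod M), wb[i] the backward
    rate i -> i-1. The net probability current through any bond,
    times the distance d advanced per completed cycle, gives the
    mean velocity.
    """
    M = len(wf)
    W = np.zeros((M, M))
    for i in range(M):
        W[(i + 1) % M, i] += wf[i]   # gain of state i+1 from i
        W[(i - 1) % M, i] += wb[i]   # gain of state i-1 from i
        W[i, i] -= wf[i] + wb[i]     # total loss out of state i
    # Replace one redundant balance equation by the normalization sum(p) = 1
    A = np.vstack([W[:-1], np.ones(M)])
    b = np.zeros(M)
    b[-1] = 1.0
    p = np.linalg.lstsq(A, b, rcond=None)[0]
    J = p[0] * wf[0] - p[1] * wb[1]  # net current through the 0 -> 1 bond
    return J * d

# Two-state toy motor, step d = 8 nm per cycle, no backward jumps
v = stationary_velocity([2.0, 1.0], [0.0, 0.0], d=8.0)
```

With detailed balance (all forward and backward rates equal) the current, and hence the velocity, vanishes; a directed motion requires the unbalanced rates discussed above.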
We investigate the rotational and thermal properties of star-forming molecular clouds using hydrodynamic simulations. Stars form from molecular cloud cores by gravoturbulent fragmentation. Understanding the angular momentum and the thermal evolution of cloud cores thus plays a fundamental role in completing the theoretical picture of star formation. This is true not only for current star formation as observed in regions like the Orion nebula or the ρ-Ophiuchi molecular cloud but also for the formation of stars of the first or second generation in the universe. In this thesis we show how the angular momentum of prestellar and protostellar cores evolves and compare our results with observed quantities. The specific angular momenta of prestellar cores in our models agree remarkably well with observations of cloud cores. Some prestellar cores go into collapse to build up stars and stellar systems. The resulting protostellar objects have specific angular momenta that fall into the range of observed binaries. We find that collapse induced by gravoturbulent fragmentation is accompanied by a substantial loss of specific angular momentum. This eases the "angular momentum problem" in star formation even in the absence of magnetic fields. The distribution of stellar masses at birth (the initial mass function, IMF) is another aspect that any theory of star formation must explain. We focus on the influence of the thermodynamic properties of star-forming gas and address this issue by studying the effects of a piecewise polytropic equation of state on the formation of stellar clusters. We increase the polytropic exponent γ from a value below unity to a value above unity at a certain critical density. The change of the thermodynamic state at the critical density selects a characteristic mass scale for fragmentation, which we relate to the peak of the IMF observed in the solar neighborhood.
Our investigation generally supports the idea that the distribution of stellar masses depends mainly on the thermodynamic state of the gas. A common assumption is that the chemical evolution of the star-forming gas can be decoupled from its dynamical evolution, with the former never affecting the latter. Although justified in some circumstances, this assumption is not true in every case. In particular, in low-metallicity gas the timescales for reaching chemical equilibrium are comparable to or larger than the dynamical timescales. In this thesis we take a first approach to combining a chemical network with a hydrodynamical code in order to study the influence of low levels of metal enrichment on the cooling and collapse of ionized gas in small protogalactic halos. Our initial conditions represent protogalaxies forming within a fossil HII region, a previously ionized HII region which has not yet had time to cool and recombine. We show that in these regions H2 is the dominant and most effective coolant, and that it is the amount of H2 formed that controls whether or not the gas can collapse and form stars. For metallicities Z ≤ 10⁻³ Zsun, metal line cooling alters the density and temperature evolution of the gas by less than 1% compared to the metal-free case at densities below 1 cm⁻³ and temperatures above 2000 K. We also find that an external ultraviolet background delays or suppresses the cooling and collapse of the gas regardless of whether it is metal-enriched or not. Finally, we study the dependence of this process on redshift and on the mass of the dark matter halo.
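The piecewise polytropic equation of state described above can be written down compactly: T ∝ ρ^(γ−1), with γ switching from below to above unity at the critical density. The exponents, critical density and normalization temperature below are illustrative placeholders, not the values used in the simulations:

```python
def polytropic_temperature(rho, rho_crit, gamma_low=0.7, gamma_high=1.1, T_crit=10.0):
    """Piecewise polytrope T ~ rho^(gamma - 1), continuous at rho_crit.

    Below rho_crit the gas cools upon compression (gamma < 1); above it
    the gas heats (gamma > 1), which suppresses further fragmentation
    and thereby selects a characteristic mass scale.
    All parameter values here are illustrative assumptions.
    """
    if rho <= rho_crit:
        return T_crit * (rho / rho_crit) ** (gamma_low - 1.0)
    return T_crit * (rho / rho_crit) ** (gamma_high - 1.0)
```

The temperature minimum at the critical density is what ties the fragmentation mass scale to a specific Jeans mass, and hence to the peak of the IMF.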
In this work, diagnostic concepts for the large-scale circulation in the troposphere and stratosphere are developed. The focus lies on the energy budget, on wave propagation, and on the interaction of atmospheric waves with the mean flow. The concepts are derived, and a new form of the local Eliassen-Palm flux that includes moisture is introduced. The diagnostics are then applied to the ERA-Interim reanalysis dataset and to a run of the ECHAM6 atmosphere model driven by observed sea surface temperature and sea ice data. The diagnostic tools for analyzing the large-scale circulation are useful, on the one hand, for furthering our understanding of the dynamics of the climate system. On the other hand, the insight gained into the relationship between energy sources and sinks, their connection to synoptic and planetary wave systems, and the resulting forcing of the mean flow can also be used to test whether climate models reproduce these observations correctly. It turns out that the deviations of the examined ECHAM6 model run with respect to the energy budget are small, but that partly strong deviations exist with respect to the propagation of atmospheric waves. Planetary waves generally show too-large intensities in the Eliassen-Palm fluxes, while within the jet streams of the upper troposphere the forcing of the mean flow by synoptic waves is distorted, since their vertical propagation is shifted relative to the observations. The influence of Arctic sea ice changes from the ice-cover minimum in August/September into winter is also investigated. Strong positive temperature anomalies are found, which are largest at the surface. These lead, especially in autumn, to an intensification of synoptic systems at Arctic latitudes, since the stability of the tropospheric stratification is reduced.
In the following winter, barotropic changes of the large-scale circulation reaching up into the stratosphere set in, which can be attributed to sea ice changes. The meridional pressure gradient decreases, leading to a pattern resembling a negative phase of the Arctic Oscillation in the troposphere and a weakened polar vortex in the stratosphere. These relationships are also investigated in an ECHAM6 model run, in which, above all, the warming trend in the Arctic is too weak. The large-scale changes in winter can partly be found in the model run as well, but deviations appear, especially in the stratosphere, for the period with the smallest ice extent. The vertical propagation of planetary waves from the troposphere into the stratosphere is reproduced in ECHAM6 with very large deviations. Thus, wave propagation represents the largest deficiency of ECHAM6 identified in this work.
In light-emitting diodes, light is generated by the recombination of injected charge carriers. This can occur in inorganic materials, in which case highly ordered crystal structures, which determine the properties of the diodes, have to be produced. Another approach is the use of organic molecules and polymers. Owing to the versatility of organic chemistry, the properties of the semiconducting polymers used can be tuned already during synthesis. Moreover, these polymers exhibit the well-known mechanical flexibility, which makes the fabrication of flexible, large-area light sources and display elements possible. The first light-emitting diode with a semiconducting polymer as emitter was produced in 1990. Since then, the field has developed rapidly, and first commercial products are available. In the course of this development it became clear that the properties of polymer light-emitting diodes, for example color and efficiency, can be improved considerably by using several components in the active layer. At the same time, new challenges arise from the interactions of the different film components. While the components are often added either to improve charge transport or to influence the emission, care must be taken that the other processes are not affected negatively. In this work, some of these interactions are investigated and explained with simple physical models. First, blue-emitting diodes based on polyfluorene are studied. Although this material is a very efficient blue emitter, it is prone to chemical defects that cannot be completely prevented. The defects form trap states for electrons; their influence can be suppressed by the addition of hole traps.
The underlying process, the tuning of the charge-carrier balance, is explained. Next, blend systems with dendronized emitters, which simultaneously act as electron traps, are investigated. Here, the different effects of the insulating shell on charge and energy transfer between the matrix and the dye core of the dendrimers are examined. In blend systems, the nature of the excited states and the mode of charge transport strongly influence these transfer processes. In addition, the charge-carrier balance again affects the emission. To characterize carrier capture in trap states, a method based on measuring the time-resolved photocurrent in organic blend films is further developed. The results show that models of charge transport developed for ordered systems cannot simply be transferred to polymer systems with high disorder. Finally, time-resolved measurements of phosphorescence in corresponding blends of polymers and organometallic compounds are presented. These systems, too, usually contain further components that improve charge transport. In such films, triplets can be transferred from the emitter to the other film constituents. With knowledge of the relevant interactions, the undesired processes can be avoided.
LCST-type synthetic thermoresponsive polymers can reversibly respond to certain stimuli in aqueous media with a massive change of their physical state. When fluorophores that are sensitive to such changes are incorporated into the polymeric structure, the response can be translated into a fluorescence signal. Based on this idea, this thesis presents sensing schemes which transduce the stimuli-induced variations in the solubility of polymer chains with covalently bound fluorophores into a well-detectable fluorescence output. Benefiting from the principles of different photophysical phenomena, i.e. fluorescence resonance energy transfer and solvatochromism, such fluorescent copolymers enabled the monitoring of stimuli such as solution temperature and ionic strength, but also of association/disassociation mechanisms with other macromolecules or of biochemical binding events, through remarkable changes in their fluorescence properties. For instance, an aqueous ratiometric dual sensor for temperature and salts was developed, relying on the delicate supramolecular assembly of a thermoresponsive copolymer with a thiophene-based conjugated polyelectrolyte. Alternatively, by taking advantage of the sensitivity of solvatochromic fluorophores, an increase in solution temperature or the presence of analytes was signaled as an enhancement of the fluorescence intensity. The simultaneous use of the chains' sensitivity towards temperature and towards a specific antibody allowed the monitoring of more complex phenomena such as competitive binding of analytes. The use of different thermoresponsive polymers, namely poly(N-isopropylacrylamide) and poly(meth)acrylates bearing oligo(ethylene glycol) side chains, revealed that the responsive polymers differed widely in their ability to perform a particular sensing function.
In order to address questions regarding the impact of the chemical structure of the host polymer on the sensing performance, the macromolecular assembly behavior below and above the phase transition temperature was evaluated by a combination of fluorescence and light scattering methods. It was found that although the temperature-triggered changes in the macroscopic absorption characteristics were similar for these polymers, properties such as the degree of hydration or the extent of interchain aggregations differed substantially. Therefore, in addition to the demonstration of strategies for fluorescence-based sensing with thermoresponsive polymers, this work highlights the role of the chemical structure of the two popular thermoresponsive polymers on the fluorescence response. The results are fundamentally important for the rational choice of polymeric materials for a specific sensing strategy.
New polymers and low-molecular-weight compounds suitable for organic light-emitting devices and organic electronics have been synthesised in recent years in order to obtain electron transport characteristics compatible with the requirements of real plastic devices. However, despite their technological importance and the considerable progress in device manufacture, the fundamental physical properties of this class of materials are still insufficiently studied. In particular, the extensive presence of distributions of localised states inside the band gap has a deep impact on their electronic properties. The presence of shallow traps, as well as the influence of the sample preparation conditions on deep and shallow localised states, has not yet been systematically explored. Thermal techniques are powerful tools for studying localised levels in inorganic and organic materials. Thermally stimulated luminescence (TSL), thermally stimulated currents (TSC) and thermally stimulated depolarisation currents (TSDC) give deep insight into shallow and deep trap levels and, in synergy with dielectric spectroscopy (DES), permit the study of polarisation and depolarisation effects. We studied, by means of numerical simulations, the first- and second-order kinetic equations, characterised by negligible and strong re-trapping respectively. We included in the equations Gaussian, exponential and quasi-continuous distributions of localised states. The shapes of the theoretical peaks were investigated by systematic variation of the two main parameters of the equations, the trap depth E and the frequency factor s, and of the parameters regulating the distributions, in particular, for a Gaussian distribution, the width σ and the integration limits. The theoretical findings were applied to experimental glow curves of thin films of polymers and low-molecular-weight compounds.
Polyphenylquinoxalines, trisphenylquinoxalines and oxadiazoles, studied because of their technological relevance, show complex thermograms with several levels of localised states and depolarisation peaks. In particular, well-ordered films of an amphiphilic substituted 2-(p-nitrophenyl)-5-(p-undecylamidophenyl)-1,3,4-oxadiazole (NADPO) are characterised by rich TSL thermograms. A wide region of shallow traps, localised at Em = 4 meV, was successfully fitted by a first-order kinetic equation with a Gaussian distribution of localised states. Two further peaks of different origin were characterised. The peaks at Tm = 221.5 K and Tm = 254.2 K have activation energies of Em = 0.63 eV and Em = 0.66 eV, frequency factors s = 2.4×10¹² s⁻¹ and s = 1.85×10¹¹ s⁻¹, and distribution widths σ = 0.045 eV and σ = 0.088 eV, respectively. With an increasing number of thermal cycles, a peak, probably connected with structural defects, appears at Tm = 197.7 K. The numerical analysis of this peak was performed with a first-order equation containing a Gaussian distribution of traps. The activation energy of the trap level is centred at Em = 0.55 eV. The distribution is perfectly symmetric with a rather small width σ = 0.028 eV. The frequency factor is s = 1.15×10¹² s⁻¹, of the same order of magnitude as that of its neighbouring peak at Tm = 221.5 K; both peaks probably have the same origin. Furthermore, the work demonstrates that the shape of the glow curves is strongly influenced by the excitation temperature and by the thermal cycles. For that reason, Gaussian distributions of localised states can be confused with exponential distributions if the previous thermal history of the samples is not adequately considered.
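A first-order (Randall-Wilkins) glow peak with a Gaussian distribution of trap depths, of the kind used in these fits, can be sketched numerically. The temperature grid, heating rate and discretization below are assumptions for illustration; E, σ and s are taken from the NADPO peak at Tm = 197.7 K, whose measured position also depends on the experimental heating rate:

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant in eV/K

def randall_wilkins(T, E, s, beta):
    """First-order TSL intensity for a single trap depth E (eV).

    T: equally spaced temperature grid (K); s: frequency factor (s^-1);
    beta: heating rate (K/s). Intensity = escape rate times the fraction
    of traps still filled, exp(-(1/beta) * integral of the rate over T).
    """
    rate = s * np.exp(-E / (KB * T))
    dT = T[1] - T[0]
    filled = np.exp(-np.cumsum(rate) * dT / beta)
    return rate * filled

def glow_curve_gaussian(T, E_mean, sigma_E, s, beta=1.0, n=41):
    """Superpose first-order peaks over a Gaussian distribution of E."""
    energies = np.linspace(E_mean - 3 * sigma_E, E_mean + 3 * sigma_E, n)
    weights = np.exp(-0.5 * ((energies - E_mean) / sigma_E) ** 2)
    weights /= weights.sum()
    return sum(w * randall_wilkins(T, E, s, beta)
               for w, E in zip(weights, energies))

T = np.linspace(150.0, 300.0, 3000)
I = glow_curve_gaussian(T, E_mean=0.55, sigma_E=0.028, s=1.15e12)
T_peak = T[np.argmax(I)]  # peak position shifts with the heating rate beta
```

The finite width σ broadens the peak relative to a discrete trap level, which is exactly the signature the fits above exploit to distinguish Gaussian from exponential distributions.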
Stars under influence: evidence of tidal interactions between stars and substellar companions
(2023)
Tidal interactions occur between gravitationally bound astrophysical bodies. If their spatial separation is sufficiently small, the bodies can induce tides on each other, leading to angular momentum transfer and altering the evolutionary path the bodies would have followed had they been single objects. Tidal processes are well established in Solar System planet-moon systems and in close stellar binary systems. But how do stars behave if they are orbited by a substellar companion (e.g. a planet or a brown dwarf) on a tight orbit?
Typically, a substellar companion inside the corotation radius of a star will migrate toward the star as it loses orbital angular momentum. On the other hand, the star will gain angular momentum which has the potential to increase its rotation rate. The effect should be more pronounced if the substellar companion is more massive. As the stellar rotation rate and the magnetic activity level are coupled, the star should appear more magnetically active under the tidal influence of the orbiting substellar companion. However, the difficulty in proving that a star has a higher magnetic activity level due to tidal interactions lies in the fact that (I) substellar companions around active stars are easier to detect if they are more massive, leading to a bias toward massive companions around active stars and mimicking the tidal interaction effect, and that (II) the age of a main-sequence star cannot be easily determined, leaving the possibility that a star is more active due to its young age.
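The corotation radius invoked above, the orbital distance at which the orbital period equals the stellar rotation period, follows directly from Kepler's third law. A small sketch with illustrative solar-like values (not tied to any specific system from this work):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def corotation_radius(m_star_kg, p_rot_s):
    """Orbital distance where the orbital period equals the stellar
    rotation period (Kepler's third law). A companion orbiting inside
    this radius loses orbital angular momentum to tides and spirals in,
    spinning the star up."""
    return (G * m_star_kg * p_rot_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

# Sun-like star rotating with a 25-day period
r_c = corotation_radius(M_SUN, 25 * 86400.0)
print(r_c / AU)  # ~0.17 AU
```

Hot Jupiters with semi-major axes of a few hundredths of an AU therefore typically orbit well inside the corotation radius of a slowly rotating host.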
In our work, we overcome these issues by employing wide stellar binary systems in which one star hosts a substellar companion and the other star provides the magnetic activity baseline for the host star: assuming the two stars have coevolved, the companion star indicates the activity level the host would have if tidal interactions had no effect on it. Firstly, we find that extrasolar planets can noticeably increase the host star's X-ray luminosity, and that the effect is more pronounced if the exoplanet is at least Jupiter-like in mass and close to the star. Further, we find that a brown dwarf has an even stronger effect, as expected, and that the X-ray surface flux difference between the host star and the wide stellar companion is a significant outlier when compared to a large sample of similar wide binary systems without any known substellar companions. This result proves that substellar-hosting wide binary systems can be good tools to reveal the tidal effect on host stars, and also shows that typical stellar age indicators such as activity or rotation cannot be used for these stars. Finally, knowing that the activity difference is a good tracer of the substellar companion's tidal impact, we develop an analytical method to calculate the modified tidal quality factor Q' of individual host stars, which quantifies the tidal dissipation efficiency in the convective envelope of a given main-sequence star.
Flares are magnetically driven explosions that occur in the atmospheres of all main sequence stars that possess an outer convection zone. Flaring activity is rooted in the magnetic dynamo that operates deep in the stellar interior, propagates through all layers of the atmosphere from the corona to the photosphere, and emits electromagnetic radiation from radio to X-rays. Eventually, this radiation, and the associated eruptions of energetic particles, are ejected into interplanetary space, where they impact planetary atmospheres and dominate the space weather environments of young star-planet systems.
Thanks to the Kepler and Transiting Exoplanet Survey Satellite (TESS) missions, flare observations have become accessible for millions of stars and star-planet systems. The goal of this thesis is to use these flares as multifaceted messengers to understand stellar magnetism across the main sequence, investigate planetary habitability, and explore how close-in planets can affect their host star.
Using space-based observations obtained by the Kepler/K2 mission, I found that flaring activity declines with stellar age, but that this decline crucially depends on stellar mass and rotation. I calibrated the ages of the stars in my sample using their membership in open clusters, from the zero-age main sequence to solar age. This allowed me to reveal the rapid transition from an active, saturated flaring state to a more quiescent, inactive flaring behavior in early M dwarfs at about 600-800 Myr. This result is an important observational constraint on stellar activity evolution that I was able to de-bias using open clusters as an activity-independent age indicator.
The TESS mission quickly superseded Kepler and K2 as the main source of flares in low-mass M dwarfs. Using TESS 2-minute cadence light curves, I developed a new technique for flare localization and discovered, contrary to the commonly held belief, that flares do not occur uniformly across the stellar surface: in fast-rotating, fully convective stars, giant flares are preferentially located at high latitudes. This bears implications both for our understanding of magnetic field emergence in these stars and for the impact on exoplanet atmospheres: a planet that orbits in the equatorial plane of its host may be spared the destructive effects of these poleward-emitting flares.
AU Mic is an early M dwarf, and the most actively flaring planet host detected to date. Its innermost companion, AU Mic b, is one of the most promising targets for a first observation of flaring star-planet interactions. In these interactions, the planet influences the star, as opposed to space weather, where the planet is always on the receiving side. The effect reflects the properties of the magnetosphere shared by planet and star, as well as the so far inaccessible magnetic properties of planets. In the roughly 50 days of TESS monitoring data of AU Mic, I searched for statistically robust signs of flaring interactions with AU Mic b as flares that occur in surplus of the star's intrinsic activity. I found the strongest, yet still marginal, signal in recurring excess flaring in phase with the orbital period of AU Mic b. If it reflects a true signal, I estimate that extending the observing time by a factor of 2-3 will yield a statistically significant detection. Well within the reach of future TESS observations, this additional data may bring us closer to robustly detecting this effect than we have ever been.
This thesis demonstrates the immense scientific value of space-based, long-baseline flare monitoring, and the versatility of flares as carriers of information about the magnetism of star-planet systems. Many discoveries still lie in wait in the vast archives that Kepler and TESS have produced over the years. Flares are intense spotlights into the magnetic structures of star-planet systems that are otherwise far below our resolution limits. The ongoing TESS mission, and soon PLATO, will further open the door to an in-depth understanding of small-scale and dynamic magnetic fields on low mass stars, and the space weather environments they create.
Dark matter (DM) has not yet been directly observed, but it has a very solid theoretical basis. There are observations that provide indirect evidence: galactic rotation curves show that galaxies rotate too fast to hold on to their constituent parts, and galaxy clusters bend the light coming from background galaxies more than expected from the mass that can be visibly accounted for. These observations, among many others, can be explained by theories that include DM. The missing piece is to detect something that can exclusively be explained by DM. Direct production in a particle accelerator is one way and indirect detection using telescopes is another. This thesis is focused on the latter method.
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is a telescope array that detects Cherenkov radiation. Theory predicts that DM particles can annihilate into, e.g., a γγ pair, producing a distinctive energy spectrum when detected by such telescopes, i.e., a monoenergetic line at the energy corresponding to the particle mass. This so-called "smoking-gun" signature is sought with a sliding window line search within the sub-range ∼ 0.3 − 10 TeV of the VERITAS energy range, ∼ 0.01 − 30 TeV.
Standard analysis within the VERITAS collaboration uses Hillas analysis and look-up tables, derived from particle simulations, to calculate the energy of the particle causing the Cherenkov shower. In this thesis, an improved analysis method has been used: modelling each shower as a 3D Gaussian improves the quality of the energy reconstruction. Five dwarf spheroidal galaxies were chosen as targets, with a total of ∼ 224 hours of data. The targets were analysed both individually and stacked. Particle simulations were based on two simulation packages, CARE and GrISU.
Improvements of up to a few percent each have been made to the energy resolution and bias correction in comparison to the standard analysis. Nevertheless, no line with a relevant significance has been detected. The most promising line is at an energy of ∼ 422 GeV with an upper-limit cross section of 8.10 · 10^−24 cm^3 s^−1 and a significance of ∼ 2.73 σ before trials correction and ∼ 1.56 σ after. Upper-limit cross sections have also been calculated for the γγ annihilation process and four other channels. The limits, ranging from ∼ 8.56 · 10^−26 to 6.61 · 10^−23 cm^3 s^−1, are in line with current limits obtained with other methods. Future, larger telescope arrays, like the upcoming Cherenkov Telescope Array (CTA), will provide better results with the help of this analysis method.
The Epoch of Reionization marks the second major change in the ionization state of the universe after recombination, going from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted from galaxies permeates into the intergalactic medium (IGM) and gradually ionizes the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it changes the ionization and thermal state of the IGM substantially and affects subsequent structure and galaxy formation through various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of understanding of the high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all these galactic properties and the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allow us to investigate the limits of constraining reionization with high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break Galaxies (LBGs).
In the first part of this thesis we focus on performing radiative transfer calculations to simulate reionization. We have developed a mapping-sphere scheme which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code to compute the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore, we have updated the 3D Monte Carlo radiative transfer code pCRASH, enabling detailed reionization simulations that take individual source characteristics into account.
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamical (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII), and temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the importance of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies and dust in the interstellar medium (ISM) for the visibility of LAEs. Comparison of our model's results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the ionizing photon escape fraction and the ISM dust distribution, which implies that LAEs act not only as tracers of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy does not break down even when we compare the simulated with the observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by exclusively using LAE observations. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition we also use our LAE model to probe the question of when a galaxy is visible as a LAE or a LBG. Within our model, galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. By finding an increasing duty cycle of LBGs with Lya emission as the UV magnitude or stellar mass of the galaxy rises, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observational trend of the Lya EWs with UV magnitude. However, the Lya EWs of the UV brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya brightest LAEs do not necessarily coincide with the UV brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and the LAE-LBG connection, and this enhances our understanding of the nature of LAEs.
Active Galactic Nuclei (AGN) are powered by gas accretion onto supermassive Black Holes (BH). The luminosity of AGN can exceed the integrated luminosity of their host galaxies by orders of magnitude, in which case they are classified as Quasi-Stellar Objects (QSOs). Some mechanism is needed to trigger the nuclear activity in galaxies and to feed the nuclei with gas. Among several possibilities, such as gravitational interactions, bar instabilities, and smooth gas accretion from the environment, the dominant process has yet to be identified. Feedback from AGN may be an important ingredient of the evolution of galaxies. However, the details of this coupling between AGN and their host galaxies remain unclear. In this work we aim to investigate the connection between AGN and their host galaxies by studying the properties of the extended ionised gas around AGN. Our study is based on observations of ~50 luminous, low-redshift (z<0.3) QSOs using the novel technique of integral field spectroscopy, which combines imaging and spectroscopy. After spatially separating the emission of AGN-ionised gas from that of HII regions, ionised solely by recently formed massive stars, we demonstrate that the specific star formation rates in several disc-dominated AGN hosts are consistent with those of normal star forming galaxies, while others display no detectable star formation activity. Whether the star formation has been actively suppressed in those particular host galaxies by the AGN, or their gas content is intrinsically low, remains an open question. By studying the kinematics of the ionised gas, we find evidence for non-gravitational motions and outflows on kpc scales only in a few objects. The gas kinematics in the majority of objects however indicate a gravitational origin. This suggests that the importance of AGN feedback may have been overrated in theoretical works, at least at low redshifts.
The [OIII] line is the strongest optical emission line of AGN-ionised gas, which can extend over several kpc in what is usually called the Narrow-Line Region (NLR). We perform a systematic investigation of the NLR size and determine a NLR size-luminosity relation that is consistent with the scenario of a constant ionisation parameter throughout the NLR. We show that previous narrow-band imaging with the Hubble Space Telescope underestimated the NLR size by a factor of >2 and that the continuum AGN luminosity is better correlated with the NLR size than the [OIII] luminosity. These effects may account for the different NLR size-luminosity relations reported in previous studies. On the other hand, we do not detect extended NLRs around all QSOs, and demonstrate that the detection of extended NLRs goes along with radio emission. We employ emission line ratios as a diagnostic for the abundance of heavy elements in the gas, i.e. its metallicity, and find that the radial metallicity gradients are always flatter than in inactive disc-dominated galaxies. This can be interpreted as evidence for radial gas flows from the outskirts of these galaxies to the nucleus. Recent or ongoing galaxy interactions are likely responsible for this effect and may turn out to be a common prerequisite for QSO activity. The metallicities of bulge-dominated hosts are systematically lower than those of their disc-dominated counterparts, which we interpret as evidence for minor mergers, supported by our detailed study of the bulge-dominated host of the luminous QSO HE 1029-1401, or for smooth gas accretion from the environment. Along these lines, another new discovery is that HE 2158-0107 at z=0.218 is the most metal-poor luminous QSO ever observed. Together with a large (30kpc) extended structure of low metallicity ionised gas, we propose smooth cold gas accretion as the most likely scenario.
Theoretical studies suggest that this process was much more important at earlier epochs of the universe, so that HE 2158-0107 might be an ideal laboratory to study this mechanism of galaxy and BH growth at low redshift in more detail in the future.
In recent years, organic solar cells (OSCs) have reached high efficiencies through the development of novel non-fullerene acceptors (NFAs). Fullerene derivatives had long been the centerpiece of the accepting materials used throughout organic photovoltaic (OPV) research, but since 2015 novel NFAs have been a game-changer and have overtaken fullerenes. However, the current understanding of the properties of NFAs for OPV is still relatively limited, and critical mechanisms defining the performance of OPVs are still topics of debate.
In this thesis, attention is paid to understanding reduced-Langevin recombination with respect to the device physics of fullerene and non-fullerene systems. The work comprises four closely linked studies. The first is a detailed exploration of the fill factor (FF), expressed in terms of transport and recombination properties, in a comparison of fullerene and non-fullerene acceptors. We identified the key reason behind the reduced FF in the NFA (ITIC-based) devices: faster non-geminate recombination relative to the fullerene (PCBM[70]-based) devices. This is followed by a consideration of a newly synthesized NFA Y-series derivative which exhibited the highest power conversion efficiency for OSCs at the time. In the second study, we illustrated the role of disorder in the non-geminate recombination and charge extraction of thick NFA (Y6-based) devices. As a result, we enhanced the FF of thick PM6:Y6 devices by reducing the disorder, which suppresses the non-geminate recombination toward a non-Langevin regime. In the third study, we revealed the reason behind the thickness independence of the short-circuit current of PM6:Y6 devices: the extraordinarily long diffusion length of Y6. The fourth study entails a broad comparison of a selection of fullerene and non-fullerene blends with respect to charge generation efficiency and recombination, unveiling the importance of efficient charge generation for achieving reduced recombination.
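For orientation, the benchmark against which "reduced-Langevin" recombination is judged is the Langevin coefficient γ_L = q(μ_e + μ_h)/(ε_r ε_0). A minimal sketch with illustrative mobility and permittivity values; the "measured" rate below is purely hypothetical and only serves to show how a reduction factor is formed:

```python
# Langevin recombination coefficient: gamma_L = q (mu_e + mu_h) / (eps_r eps_0)
q = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
eps_r = 3.5          # relative permittivity, typical for organic semiconductors
mu_e = mu_h = 1e-8   # carrier mobilities in m^2/(V s), i.e. 1e-4 cm^2/(V s)

gamma_L = q * (mu_e + mu_h) / (eps_r * eps0)   # Langevin rate, m^3/s

# A hypothetical measured bimolecular rate, expressed as a reduction factor:
gamma_meas = 5e-18                              # m^3/s, illustrative only
print(f"gamma_L = {gamma_L:.2e} m^3/s, "
      f"reduction factor = {gamma_meas / gamma_L:.3f}")
```

A reduction factor well below one, as sketched here, is what distinguishes a non-Langevin blend from the diffusion-limited Langevin picture.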
I employed transient measurements such as Time Delayed Collection Field (TDCF) and Resistance dependent Photovoltage (RPV), and steady-state techniques such as Bias Assisted Charge Extraction (BACE), Temperature-Dependent Space Charge Limited Current (T-SCLC), Capacitance-Voltage (CV), and Photo-Induced Absorption (PIA), to analyze the OSCs.
The outcomes of this thesis together draw a complex picture of multiple factors that affect reduced-Langevin recombination and thereby the FF and overall performance. This provides a suitable platform for identifying important parameters when designing new blend systems. As a result, we succeeded in improving the overall performance by enhancing the FF of a thick NFA device through adjustment of the amount of solvent additive in the active blend solution. The work also highlights potentially critical gaps in the current experimental understanding of fundamental charge interaction and recombination dynamics.
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission that is expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results of this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. The analysis results emphasise the capabilities of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold that has ever been reached in ground-based γ-ray astronomy, opening a new window to the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
Gamma-ray astronomy has proven to provide unique insights into cosmic-ray accelerators in the past few decades. By combining information at the highest photon energies with the entire electromagnetic spectrum in multi-wavelength studies, detailed knowledge of non-thermal particle populations in astronomical objects and systems has been gained: many individual classes of gamma-ray sources could be identified inside our galaxy and outside of it. Different sources were found to exhibit a wide range of temporal evolution, ranging from seconds to stable behaviour over many years of observations. With the dawn of both neutrino and gravitational wave astronomy, additional messengers have come into play in recent years. This development marks the advent of multi-messenger astronomy: a novel approach not only to the search for sources of cosmic rays, but to astronomy in general.
In this thesis, both traditional multi-wavelength studies and multi-messenger studies will be presented. They were carried out with the H.E.S.S. experiment, an imaging air Cherenkov telescope array located in the Khomas Highland of Namibia. H.E.S.S. entered its second phase in 2012 with the addition of a large, fifth telescope. While the initial array was limited to the study of gamma-rays with energies above 100 GeV, the new instrument allows access to gamma-rays with energies down to a few tens of GeV. Strengths of the multi-wavelength approach will be demonstrated using the example of the galaxy NGC253, which is undergoing an episode of enhanced star-formation. The gamma-ray emission will be discussed in light of all the information on this system available from radio, infrared and X-ray observations. These wavelengths reveal detailed information on the population of supernova remnants, which are suspected cosmic-ray accelerators. A broad-band gamma-ray spectrum is derived from H.E.S.S. and Fermi-LAT data. The improved analysis of H.E.S.S. data provides a measurement which is no longer dominated by systematic uncertainties. The long-term behaviour of cosmic rays in the starburst galaxy NGC253 is finally characterised.
In contrast to the long time-scale evolution of a starburst galaxy, multi-messenger studies are especially intriguing when shorter time-scales are being probed. A prime example of a short time-scale transient is the Gamma Ray Burst. The efforts to understand this phenomenon effectively founded the branch of gamma-ray astronomy. The multi-messenger approach allows for the study of elusive phenomena such as Gamma Ray Bursts and other transients using electromagnetic radiation, neutrinos, cosmic rays and gravitational waves contemporaneously. With contemporaneous observations having gained importance only recently, the execution of such observation campaigns still presents a big challenge due to the differing limitations and strengths of the infrastructures.
An alert system for transient phenomena has been developed for H.E.S.S. over the course of this thesis. It aims to address many follow-up challenges in order to maximise the science return of the new large telescope, which is able to repoint much faster than the initial four telescopes. The system allows for fully automated observations based on scientific alerts from any wavelength or messenger and enables H.E.S.S. to participate in multi-messenger campaigns. Utilising this new system, many interesting multi-messenger observation campaigns have been performed. Several highlight observations with H.E.S.S. are analysed, presented and discussed in this work. Among them are observations of Gamma Ray Bursts with low latency and low energy threshold, the follow-up of a neutrino candidate in spatial coincidence with a flaring active galactic nucleus, and of the merger of two neutron stars, which was revealed by the coincidence of gravitational waves and a Gamma-Ray Burst.
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols as parts of the Arctic atmosphere play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute with continuous altitude resolved aerosol profiles to the understanding of occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. The attention was turned to the analysis of periods with high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of both the years 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months, subsequently to the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction and depolarization) as well as water vapor profiles within the lower troposphere. Combination of KARL data with data from other instruments on site, namely radiosondes, sun photometer, Micro Pulse LIDAR, and tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere. 
The two spring periods March and April 2007 and 2009 were at first analyzed based on meteorological parameters, like local temperature and relative humidity profiles as well as large scale pressure patterns and air mass origin regions. Here, it was not possible to find a clear correlation between enhanced AOD and air mass origin. However, in a comparison of two cloud free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air mass transport through the central Arctic. This suggests the occurrence of aerosol transformation processes during the aerosol transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other performed case studies suggest the differentiation between three aerosol event types and their origin: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary layer aerosols, and enhanced aerosol load across wide portions of the troposphere. For the spring period 2007, the available KARL data were statistically analyzed using a characterization scheme, which is based on optical characteristics of the scattering particles. The scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 provided the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar within both years, with maximum values above 0.1 about three to five weeks after the respective eruption. In both years, the stratospheric AOD persisted at higher values than usual until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct and thin layers which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude.
Three case studies were performed, all of which revealed rather large refractive indices of m = (1.53–1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data of other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
Movement and navigation are essential for many organisms during some parts of their lives. This is also true for bacteria, which can move along surfaces and swim through liquid environments. They are able to sense their environment and move towards environmental cues in a directed fashion.
These abilities enable microbial lifecycles in biofilms, improved food uptake, host infection, and much more. In this thesis we study aspects of the swimming movement, or motility, of the soil bacterium P. putida. Like most bacteria, P. putida swims by rotating its helical flagella, but their arrangement differs from that of the main model organism in bacterial motility research, E. coli. P. putida is known for its intriguing motility strategy, in which fast and slow episodes can follow each other. Up until now, it was not known how these two speeds are produced and what advantages they might confer to this bacterium.
Normally the flagella, the main components of thrust generation in bacteria, are not observable by ordinary light microscopy. In order to elucidate this behavior, we therefore used a fluorescent staining technique on a mutant strain of this species to specifically label the flagella while leaving the cell body only faintly stained. This allowed us to image the flagella of the swimming bacteria with high spatial and temporal resolution using a customized high-speed fluorescence microscopy setup. Our observations show that P. putida can swim in three different modes. First, it can swim with the flagella pushing the cell body, which is the main mode of swimming motility previously known from other bacteria. Second, it can swim with the flagella pulling the cell body, which was thought not to be possible with multiple flagella. Lastly, it can wrap its flagellar bundle around the cell body, which results in a speed that is slower by a factor of two. In this mode, the flagella are in a different physical conformation with a larger radius, so the cell body can fit inside. These three swimming modes explain the previous observation of two speeds, as well as the non-strict alternation of the different speeds.
Because most bacterial swimming in nature does not occur in smoothly walled glass enclosures under a microscope, we used an artificial, microfluidic, structured system of obstacles to study the motion of our model organism in a structured environment. Bacteria were observed in microchannels with cylindrical obstacles of different sizes and spacings using video microscopy and cell tracking. We analyzed turning angles, run times, and run lengths, which we compared to a minimal model for movement in structured geometries. Our findings show that hydrodynamic interactions with the walls lead to a guiding of the bacteria along obstacles. When comparing the observed behavior with the statistics of a particle that is deflected at every obstacle contact, we find that cells run for longer distances than predicted by that model.
Navigation in chemical gradients is one of the main applications of motility in bacteria. We studied the swimming response of P. putida cells to chemical stimuli (chemotaxis) of the common food preservative sodium benzoate. Using a microfluidic gradient generation device, we created gradients of varying strength and observed the motion of cells with a video microscope and subsequent cell tracking. Analysis of different motility parameters, like run lengths and times, shows that P. putida employs the classical chemotaxis strategy of E. coli: runs up the gradient are biased to be longer than those down the gradient. Using the two different run speeds we observed due to the different swimming modes, we classify runs into `fast' and `slow' modes with a Gaussian mixture model (GMM). We find no evidence that P. putida uses its swimming modes to perform chemotaxis.
In most studies of bacterial motility, cell tracking is used to gather trajectories of individual swimming cells. These trajectories then have to be decomposed into run sections and tumble sections. Several algorithms have been developed to this end, but most require manual tuning of a number of parameters, or extensive measurements with chemotaxis mutant strains. Together with our collaborators, we developed a novel motility analysis scheme based on generalized Kramers-Moyal coefficients. From the underlying stochastic model, many parameters, like run length, can be inferred by an optimization procedure without the need for explicit run and tumble classification. The method can, however, be extended to a fully fledged tumble classifier. Using this method, we analyze E. coli chemotaxis measurements in an aspartate analog and find evidence for a chemotactic bias in the tumble angles.
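The idea behind a Kramers-Moyal analysis can be illustrated by estimating the first (drift) coefficient from a trajectory via conditional increments. The Ornstein-Uhlenbeck speed trace below is a synthetic stand-in for real tracking data and is not the generalized scheme developed with our collaborators:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic Ornstein-Uhlenbeck speed trace: relaxation toward v0 = 30 µm/s
# plus noise, standing in for a stochastic run-speed time series.
dt, n, v0, tau, noise = 0.01, 200_000, 30.0, 0.5, 8.0
v = np.empty(n)
v[0] = v0
for i in range(1, n):
    v[i] = (v[i-1] - (v[i-1] - v0) / tau * dt
            + noise * np.sqrt(dt) * rng.standard_normal())

def drift_coefficient(v, dt, bins=30):
    """First Kramers-Moyal coefficient: D1(x) = <v(t+dt) - v(t) | v(t)=x> / dt."""
    edges = np.linspace(v.min(), v.max(), bins + 1)
    idx = np.digitize(v[:-1], edges) - 1
    dv = np.diff(v)
    centers, d1 = [], []
    for b in range(bins):
        mask = idx == b
        if mask.sum() > 100:               # require enough samples per bin
            centers.append(0.5 * (edges[b] + edges[b + 1]))
            d1.append(dv[mask].mean() / dt)
    return np.array(centers), np.array(d1)

x, d1 = drift_coefficient(v, dt)
# For this process the estimated drift should follow -(x - v0)/tau,
# i.e. a falling line whose zero crossing recovers the fixed point near v0.
print(f"estimated fixed point: {x[np.argmin(np.abs(d1))]:.1f} µm/s")
```

In the actual scheme, such coefficients feed an optimization that recovers run and tumble statistics without a hand-tuned classifier.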
Most of the baryonic matter in the Universe resides in a diffuse gaseous phase in-between galaxies consisting mostly of hydrogen and helium. This intergalactic medium (IGM) is distributed in large-scale filaments as part of the overall cosmic web. The luminous extragalactic objects that we can observe today, such as galaxies and quasars, are surrounded by the IGM in the most dense regions within the cosmic web. The radiation of these objects contributes to the so-called ultraviolet background (UVB) which keeps the IGM highly ionized ever since the epoch of reionization.
Measuring the amount of absorption due to intergalactic neutral hydrogen (HI) against extragalactic background sources is a very useful tool to constrain the energy input of ionizing sources into the IGM. Observations suggest that the HI Lyman-alpha effective optical depth, τ_eff, decreases with decreasing redshift, which is primarily due to the expansion of the Universe. However, some studies find a smaller value of the effective optical depth than expected at the specific redshift z~3.2, possibly related to the complete reionization of helium in the IGM and a hardening of the UVB. The detection and possible cause of a decrease in τ_eff at z~3.2 is controversially debated in the literature and the observed features need further explanation.
To better understand the properties of the mean absorption at high redshift and to assess whether the detection of a τ_eff feature is real, we study 13 high-resolution, high signal-to-noise ratio quasar spectra observed with the Ultraviolet and Visual Echelle Spectrograph (UVES) at the Very Large Telescope (VLT). The redshift evolution of the effective optical depth, τ_eff(z), is measured in the redshift range 2.7≤z≤3.6. The influence of metal absorption features is removed by performing a comprehensive absorption-line-fitting procedure.
In the first part of the thesis, a line-parameter analysis of the column density, N, and Doppler parameter, b, of ≈7500 individually fitted absorption lines is performed. The results are in good agreement with findings from previous surveys.
The second (main) part of this thesis deals with the analysis of the redshift evolution of the effective optical depth. The τ_eff measurements vary around the empirical power law τ_eff(z)~(1+z)^(γ+1) with γ=2.09±0.52. The same analysis as for the observed spectra is performed on synthetic absorption spectra. From a comparison between observed and synthetic spectral data it can be inferred that the uncertainties of the τ_eff values are likely underestimated and that the scatter is probably caused by high-column-density absorbers with column densities in the range 15≤logN≤17. Such absorbers are, however, rarely observed in the real Universe. Hence, the difference in τ_eff between different observational data sets and absorption studies is most likely caused by cosmic variance. If, alternatively, the disagreement between such data results from a too optimistic estimate of the (systematic) errors, it is also possible that all τ_eff measurements agree with a smooth evolution within the investigated redshift range. To explore the different analysis techniques of previous studies in detail, an extensive comparison of the literature with the results of this work is presented in this thesis.
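For orientation, the quoted power law can be evaluated directly. In the sketch below, only the redshift scaling (1+z)^(γ+1) with γ = 2.09 comes from the text; the normalization tau0 is a hypothetical placeholder.

```python
def tau_eff(z, tau0=1.0, gamma=2.09):
    """Empirical scaling tau_eff(z) ~ (1+z)^(gamma+1); tau0 is a
    hypothetical normalization, not a fitted value."""
    return tau0 * (1.0 + z) ** (gamma + 1.0)

# Expected increase of the mean absorption across the studied redshift range:
ratio = tau_eff(3.6) / tau_eff(2.7)
```

Across the studied range 2.7 ≤ z ≤ 3.6, this scaling corresponds to an increase of the mean absorption by roughly a factor of two.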
Although a final explanation for the occurrence of the τ_eff deviation at z~3.2 in different studies cannot be given here, our study, the most detailed line-fitting analysis of its kind performed at the investigated redshifts so far, provides another important benchmark for the characterization of the HI Ly-alpha effective optical depth at high redshift and its indicated unusual behavior at z~3.2.
Structural dynamics of photoexcited nanolayered perovskites studied by ultrafast x-ray diffraction
(2012)
This publication-based thesis represents a contribution to the active research field of ultrafast structural dynamics in laser-excited nanostructures. The investigation of such dynamics is mandatory for understanding the various physical processes on microscopic scales in complex materials, which hold great potential for advances in many technological applications. I theoretically and experimentally examine the coherent, incoherent and anharmonic lattice dynamics of epitaxial metal-insulator heterostructures on timescales ranging from femtoseconds up to nanoseconds. To infer information on the transient dynamics in the photoexcited crystal lattices, experimental techniques using ultrashort optical and x-ray pulses are employed. The experimental setups include table-top sources as well as large-scale facilities such as synchrotron sources. At the core of my work lies the development of a linear-chain model to simulate and analyze the photoexcited atomic-scale dynamics. The calculated strain fields are then used to simulate the optical and x-ray response of the considered thin films and multilayers in order to relate the experimental signatures to particular structural processes. In this way, one obtains insight into the rich lattice dynamics, which exhibit coherent transport of vibrational energy from local excitations via delocalized phonon modes of the samples. The complex deformations in tailored multilayers are identified to give rise to highly nonlinear x-ray diffraction responses due to transient interference effects. The understanding of such effects and the ability to calculate them precisely are exploited for the design of novel ultrafast x-ray optics. In particular, I present several Phonon Bragg Switch concepts to efficiently generate ultrashort x-ray pulses for time-resolved structural investigations.
By extending the numerical models to include incoherent phonon propagation and anharmonic lattice potentials, I present a new view on the fundamental research topics of nanoscale thermal transport and anharmonic phonon-phonon interactions such as nonlinear sound propagation and phonon damping. The former topic is exemplified by the time-resolved heat conduction from thin SrRuO3 films into a SrTiO3 substrate, which exhibits unexpectedly slow heat conduction. Furthermore, I discuss various experiments which can be well reproduced by the versatile numerical models and thus evidence strong lattice anharmonicities in the perovskite oxide SrTiO3. The thesis also presents several advances in experimental techniques, such as time-resolved phonon spectroscopy with optical and x-ray photons, as well as concepts for the implementation of x-ray diffraction setups at standard synchrotron beamlines with largely improved time resolution for investigations of ultrafast structural processes. This work forms the basis for ongoing research topics in complex oxide materials, including electronic correlations and phase transitions related to the elastic, magnetic and polarization degrees of freedom.
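The essence of a linear-chain model of this kind can be sketched as point masses coupled by harmonic springs (all parameters below are dimensionless toy values, not the material constants used in the thesis): sudden laser heating is mimicked by an instantaneous change of the rest strain in the "excited film" bonds, which launches a coherent strain front into the "substrate" part of the chain.

```python
import numpy as np

# Chain of N unit cells (mass m, spring constant k, lattice spacing 1),
# all in dimensionless toy units.
N, n_exc, k, m, dt, steps = 400, 40, 1.0, 1.0, 0.05, 2000
x = np.zeros(N)                     # displacements from the old equilibrium
v = np.zeros(N)
eta = np.zeros(N - 1)
eta[:n_exc] = 0.01                  # photoinduced rest strain of the excited bonds

def forces(x):
    s = np.diff(x) - eta            # deviation of each bond from its new rest length
    f = np.zeros_like(x)
    f[:-1] += k * s                 # bond acts on its left particle...
    f[1:] -= k * s                  # ...and reacts on its right particle
    return f

a = forces(x) / m
for _ in range(steps):              # velocity-Verlet integration
    x += v * dt + 0.5 * a * dt ** 2
    a_new = forces(x) / m
    v += 0.5 * (a + a_new) * dt
    a = a_new

strain = np.diff(x)                 # instantaneous strain profile along the chain
```

After t = steps·dt = 100 (sound speed 1 in these units) the strain front has travelled to roughly cell 140, while the far end of the chain is still unperturbed and the excited layer is left behind in an expanded state.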
The Lyman-α (Lyα) line commonly assists in the detection of high-redshift galaxies, the so-called Lyman-alpha emitters (LAEs). LAEs are useful tools to study the baryonic matter distribution of the high-redshift universe. Exploring their spatial distribution not only reveals the large-scale structure of the universe at early epochs, but also provides insight into the early formation and evolution of the galaxies we observe today. Because dark matter halos (DMHs) serve as sites of galaxy formation, the LAE distribution also traces that of the underlying dark matter. However, the details of this relation and their co-evolution over time remain unclear. Moreover, theoretical studies predict that the spatial distribution of LAEs also impacts their own circumgalactic medium (CGM) by influencing their extended Lyα gaseous halos (LAHs), whose origin is still under investigation. In this thesis, I make several contributions to improving our knowledge of these fields using samples of LAEs observed with the Multi Unit Spectroscopic Explorer (MUSE) at redshifts of 3 < z < 6.
The goal of this thesis was to thoroughly investigate the behaviour of multimode fibres in order to aid the development of modern and forthcoming fibre-fed spectrograph systems. Based on the Eigenmode Expansion Method, a field propagation model was created that can emulate effects in fibres relevant for astronomical spectroscopy, such as modal noise, scrambling, and focal ratio degradation. These effects are of major concern for any fibre-coupled spectrograph used in astronomical research. Changes in the focal ratio, changes in the modal distribution of light, or imperfect scrambling limit the accuracy of measurements, e.g. the flux determination of the astronomical object, the sky-background subtraction and detection limit for faint galaxies, or the spectral line position accuracy used for the detection of extra-solar planets.
Usually, fibres used for astronomical instrumentation are characterized empirically through tests. The results of this work make it possible to predict the fibre behaviour under various conditions, using sophisticated software tools to simulate the waveguide behaviour and mode transport of fibres.
The simulation environment works with two software interfaces. The first is the mode solver module FemSIM from RSoft, which is used to calculate all the propagation modes and effective refractive indices of a given system. The second interface consists of Python scripts which enable the simulation of the near- and far-field outputs of a given fibre. The characteristics of the input field can be manipulated to emulate real conditions. Focus variations, spatial translation, angular fluctuations, and disturbances through the mode coupling factor can also be simulated.
At present, either fully coherent or fully incoherent propagation can be simulated; partial coherence was not addressed in this work. Another limitation is that the simulations work exclusively for the monochromatic case and that the loss coefficient of the fibres is not considered. Nevertheless, the simulations were able to match the results of realistic measurements.
To test the validity of the simulations, real fibre measurements were used for comparison. Two fibres with different cross-sections were characterized: the first had a circular cross-section, the second an octagonal one. The utilized test bench was originally developed for the characterization of the prototype fibres of the 4MOST fibre feed. It allowed for parallel laser beam measurements, light cone measurements, and scrambling measurements. With the appropriate configuration, the near- and/or far-field could be acquired.
By means of modal noise analysis, it was possible to compare the near-field speckle patterns of simulations and measurements as a function of the input angle. The spatial frequencies that originate from modal interference could be analyzed using power spectral density analysis; measurements and simulations yielded similar results. Measurements with induced modal scrambling were compared to simulations using incoherent propagation, and once again similar results were achieved. Through both measurements and simulations, the enlargement of the near-field distribution could be observed and analyzed. The simulations made it possible to explain incoherent intensity fluctuations that appear in real measurements due to the field distribution of the active propagation modes.
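The difference between coherent and incoherent mode superposition that underlies these comparisons can be illustrated with a 1D toy "fibre core": sinusoidal mode profiles and random excitation amplitudes stand in for a real mode solver, and all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 512)      # transverse coordinate across a 1D toy core
M = 30                              # number of guided modes (toy value)
modes = np.array([np.sqrt(2.0) * np.sin((m + 1) * np.pi * x) for m in range(M)])
amps = rng.normal(size=M) + 1j * rng.normal(size=M)      # random modal excitation
phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M))   # accumulated propagation phases

# Coherent near field: the modes interfere and produce a speckle pattern.
I_coh = np.abs((amps * phases) @ modes) ** 2
# Incoherent near field: mode intensities add, interference is washed out.
I_incoh = ((np.abs(amps) ** 2)[:, None] * modes ** 2).sum(axis=0)

def contrast(I):
    """Speckle contrast: standard deviation over mean of the intensity."""
    return I.std() / I.mean()
```

The coherent pattern shows fully developed speckle (contrast near one), while the incoherent sum of mode intensities is much smoother, mirroring the modal-scrambling comparison described above.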
By using Voigt analysis of the far-field distribution, it was possible to separate the modal diffusion component in order to compare it with the simulations. Through an appropriate assessment, the modal diffusion component as a function of the input angle could be translated into angular divergence. The simulations gave the minimal angular divergence of the system. From the mean difference between simulations and measurements, a figure of merit is derived that can be used to characterize the angular divergence of real fibres using the simulations. Furthermore, it was possible to simulate light cone measurements. Given the overall consistent results, it can be stated that the simulations represent a good tool to assist the fibre characterization process for fibre-fed spectrograph systems.
This work was made possible by BMBF grant 05A14BA1, which was part of the phase A study of the fibre system for MOSAIC, a multi-object spectrograph for the Extremely Large Telescope (ELT-MOS).
Today, it is well known that galaxies like the Milky Way consist not only of stars but also of gas and dust. The galactic halo, a sphere of gas that surrounds the stellar disk of a galaxy, is especially interesting: it provides a wealth of information about gaseous material flowing towards and away from galaxies and about their hierarchical evolution. For the Milky Way, the so-called high-velocity clouds (HVCs), fast-moving neutral gas complexes in the halo that can be traced by absorption-line measurements, are believed to play a crucial role in the overall matter cycle of our Galaxy. Over the last decades, the properties of these halo structures and their connection to the local circumgalactic and intergalactic medium (CGM and IGM, respectively) have been investigated in great detail by many different groups. So far it remains unclear, however, to what extent the results of these studies can be transferred to other galaxies in the local Universe. In this thesis, we study the absorption properties of Galactic HVCs and compare the HVC absorption characteristics with those of intervening QSO absorption-line systems at low redshift. The goal of this project is to improve our understanding of the spatial extent and physical conditions of gaseous galaxy halos in the local Universe. In the first part of the thesis we use HST/STIS ultraviolet spectra of more than 40 extragalactic background sources to statistically analyze the absorption properties of the HVCs in the Galactic halo. We determine fundamental absorption-line parameters, including covering fractions of different weakly/intermediately/highly ionized metals, with a particular focus on SiII and MgII. Due to the similarity in the ionization properties of SiII and MgII, we are able to estimate the contribution of HVC-like halo structures to the cross section of intervening strong MgII absorbers at z = 0.
Our study implies that only the most massive HVCs would be regarded as strong MgII absorbers if the Milky Way halo were seen as a QSO absorption-line system from an exterior vantage point. Combining the observed absorption cross section of Galactic HVCs with the well-known number density of intervening strong MgII absorbers at z = 0, we conclude that the contribution of infalling gas clouds (i.e., HVC analogs) in the halos of Milky Way-type galaxies to the cross section of strong MgII absorbers is 34%. This result indicates that only about one third of the strong MgII absorption can be associated with HVC analogs around other galaxies, while the majority of the strong MgII systems is possibly related to galaxy outflows and winds. The second part of this thesis focuses on the properties of intervening metal absorbers at low redshift. The analysis of the frequency and physical conditions of intervening metal systems in QSO spectra and their relation to nearby galaxies offers new insights into the typical conditions of gaseous galaxy halos. One major aspect of our study was to regard intervening metal systems as possible HVC analogs. We perform a detailed analysis of absorption-line properties and line statistics for 57 metal absorbers along 78 QSO sightlines, using newly obtained ultraviolet spectra taken with HST/COS. We find clear evidence for a bimodal distribution of the HI column density in the absorbers, a trend that we interpret as a sign of two different classes of absorption systems (with HVC analogs at the high-column-density end). With the help of the strong transitions of SiII λ1260, SiIII λ1206, and CIII λ977 we have set up Cloudy photoionization models to estimate the local ionization conditions, gas densities, and metallicities.
We find that the intervening absorption systems studied here have, on average, physical conditions similar to those of Galactic HVC absorbers, providing evidence that many of them represent HVC analogs in the vicinity of other galaxies. We therefore determine typical halo sizes for SiII, SiIII, and CIII for L = 0.01L∗ and L = 0.05L∗ galaxies. Based on the covering fractions of the different ions in the Galactic halo, we find that, for example, the typical halo size for SiIII is ∼160 kpc for L = 0.05L∗ galaxies. We test the plausibility of this result by searching for known galaxies close to the QSO sightlines and at similar redshifts as the absorbers. We find that more than 34% of the measured SiIII absorbers have galaxies associated with them, with the majority of the absorbers indeed being at impact parameters ρ ≤ 160 kpc.
In this thesis we use integral-field spectroscopy to detect and understand Lyman-α (Lyα) emission from high-redshift galaxies.
Intrinsically, the Lyα emission at λ = 1216 Å is the strongest recombination line from galaxies. It arises from the 2p → 1s transition in hydrogen. In star-forming galaxies the line is powered by ionisation of the interstellar gas by hot O- and B-stars. Galaxies with star-formation rates of 1-10 Msol/year are expected to have Lyα luminosities of 10^42-10^43 erg/s, corresponding to fluxes of ~10^-18-10^-17 erg/s/cm² at redshifts z~3, where Lyα is easily accessible with ground-based telescopes. However, star-forming galaxies do not show these expected Lyα fluxes. Primarily this is a consequence of the high absorption cross-section of neutral hydrogen for Lyα photons (σ ~ 10^-14 cm²). Therefore, in typical interstellar environments Lyα photons have to undergo a complex radiative transfer. The exact conditions under which Lyα photons can escape a galaxy are poorly understood.
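The quoted correspondence between luminosity and flux follows from the standard relation f = L / (4π d_L²). The sketch below assumes a generic flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3); these parameters are illustrative and not taken from the thesis.

```python
import numpy as np

# Generic flat LCDM parameters (illustrative, not from the thesis).
H0, Om, c_kms = 70.0, 0.3, 2.998e5     # km/s/Mpc, dimensionless, km/s
MPC_CM = 3.0857e24                      # centimetres per megaparsec

def d_lum_mpc(z, n=10_000):
    """Luminosity distance in Mpc via simple trapezoidal integration."""
    zs = np.linspace(0.0, z, n)
    E = np.sqrt(Om * (1.0 + zs) ** 3 + (1.0 - Om))
    d_c = c_kms / H0 * np.trapz(1.0 / E, zs)   # comoving distance
    return (1.0 + z) * d_c

def lya_flux(L_erg_s, z):
    """Observed flux of a line of luminosity L (erg/s) at redshift z."""
    dl_cm = d_lum_mpc(z) * MPC_CM
    return L_erg_s / (4.0 * np.pi * dl_cm ** 2)

f_line = lya_flux(1e42, 3.0)   # flux of a 10^42 erg/s Lya emitter at z = 3
```

For L = 10^42 erg/s at z = 3 this gives f of order 10^-17 erg/s/cm², consistent with the range quoted above.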
Here we present results from three observational projects. In Chapter 2, we show integral field spectroscopic observations of 14 nearby star-forming galaxies in Balmer α radiation (Hα, λ = 6562.8 Å). These observations were obtained with the Potsdam Multi Aperture Spectrophotometer at the Calar Alto 3.5m Telescope. Hα directly traces the intrinsic Lyα radiation field. We present Hα velocity fields and velocity dispersion maps spatially registered onto Hubble Space Telescope Lyα and Hα images. From our observations, we conjecture a causal connection between spatially resolved Hα kinematics and Lyα photometry for individual galaxies. Statistically, we find that dispersion-dominated galaxies are more likely to emit Lyα photons than galaxies where ordered gas motions dominate. This result indicates that turbulence in actively star-forming systems favours an escape of Lyα radiation.
Not only massive stars can power Lyα radiation, but also non-thermal emission from an accreting super-massive black hole in the galaxy centre. If a galaxy harbours such an active galactic nucleus, the rate of hydrogen-ionising photons can be more than 1000 times higher than that of a typical star-forming galaxy. This radiation can potentially ionise large regions well outside the main stellar body of galaxies. Therefore, it is expected that the neutral hydrogen in these circum-galactic regions shines fluorescently in Lyα. Circum-galactic gas plays a crucial role in galaxy formation. It may act as a reservoir for fuelling star formation, and it is also subject to feedback processes that expel galactic material. If Lyα emission from this circum-galactic medium (CGM) were detected, these important processes could be studied in situ around high-z galaxies. In Chapter 3, we show observations of five radio-quiet quasars with PMAS to search for possible extended CGM emission in the Lyα line. However, in four of the five objects, we find no significant traces of this emission. In the fifth object, there is evidence for a weak and spatially quite compact Lyα excess at several kpc outside the nucleus. The faintness of these structures is consistent with the idea that radio-quiet quasars typically reside in dark matter haloes of modest masses. While we were not able to detect Lyα CGM emission, our upper limits provide constraints for the new generation of IFS instruments at 8-10m class telescopes.
The Multi Unit Spectroscopic Explorer (MUSE) at ESO's Very Large Telescope is one such unique instrument. One of the main motivating drivers in its construction was its use as a survey instrument for Lyα-emitting galaxies at high z. Currently, we are conducting such a survey that will cover a total area of ~100 square arcminutes with 1-hour exposures for each 1 square arcminute MUSE pointing. As a first result from this survey, we present in Chapter 5 a catalogue of 831 emission-line-selected galaxies from a 22.2 square arcminute region in the Chandra Deep Field South. In order to construct the catalogue, we developed and implemented a novel source detection algorithm, LSDCat, based on matched filtering for line emission in 3D spectroscopic datasets (Chapter 4). Our catalogue contains 237 Lyα-emitting galaxies in the redshift range 3 ≲ z ≲ 6. Only four of those previously had spectroscopic redshifts in the literature. We conclude this thesis with an outlook on the construction of a Lyα luminosity function based on this unique sample (Chapter 6).
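The matched-filtering idea behind a detection tool of this kind can be illustrated in one dimension (a single noisy spectrum instead of a 3D datacube; line position, width and noise level below are made-up numbers, and this is not the LSDCat implementation).

```python
import numpy as np

rng = np.random.default_rng(2)
n, sigma_noise = 1000, 0.3
x = np.arange(n)

# A faint Gaussian emission line (peak 1.0, width 3 px) buried in white noise;
# its single-pixel S/N is only about 1.0/0.3 ~ 3.
signal = np.exp(-0.5 * ((x - 400) / 3.0) ** 2)
data = signal + rng.normal(0.0, sigma_noise, n)

# Matched filter: cross-correlate with the expected line shape; for white
# noise, dividing by sigma*sqrt(sum t^2) turns the output into an S/N map.
t = np.exp(-0.5 * (np.arange(-15, 16) / 3.0) ** 2)
snr = np.correlate(data, t, mode="same") / (sigma_noise * np.sqrt((t ** 2).sum()))
```

The matched-filter S/N at the line position is roughly twice the single-pixel value, which is why filtering with the expected line shape recovers faint emitters.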
In solid azobenzene-containing polymers, macroscopic material transport was observed upon irradiation with blue light. To follow the dynamics of grating formation, a grating-inscription setup was built at a synchrotron radiation storage ring. With it, the grating formation rate could, for the first time in this work, be investigated in situ with simultaneous X-ray and light scattering. Using a specially adapted X-ray scattering theory, very good agreement between theoretical calculations and the measurement results was achieved. It could be demonstrated that a density grating develops simultaneously with the surface relief grating. By separating the two scattering contributions, the dynamics of the structure formation could be determined. Furthermore, the molecular orientation at the surface of a surface relief grating was demonstrated for the first time by means of photoelectron spectroscopy. The cause of the motion can be attributed to a momentum transfer during isomerization, while the direction of motion is determined by the electric field vector. The theory of grating formation could be improved.
The study of microlensed astronomical objects makes it possible to obtain information about the size and structure of these objects. In the first part of this work, the spectra of three lensed quasars, obtained with the Potsdam Multi Aperture Spectrophotometer (PMAS), are examined for signs of microlensing. Evidence for microlensing was found in the spectra of the quadruple quasar HE 0435-1223 and the double quasar HE 0047-1756, while the double quasar UM 673 (Q 0142-100) shows no signs of microlensing. Inverting the light curve of a microlensing caustic-crossing event makes it possible to reconstruct the one-dimensional brightness profile of the lensed source. This is investigated in the second part of this work. The mathematical description of this task leads to a Volterra integral equation of the first kind, whose solution is an ill-posed problem. To solve it, a local regularization method is applied in this work, which is better adapted to the causal structure of the Volterra equation than the previously used Tikhonov-Phillips regularization. It turns out that this method allows a better reconstruction of smaller structures in the source. Furthermore, the applicability of the regularization method to realistic light curves with irregular sampling or larger gaps in the data points is investigated.
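The ill-posedness can be demonstrated with the simplest Volterra equation of the first kind, the integral of f from 0 to t equalling g(t), in place of the actual microlensing kernel (discretization, noise level and test function below are toy choices): the discretized lower-triangular system inverts exactly for clean data, but tiny noise in g is amplified by roughly 1/h, which is what regularization has to suppress.

```python
import numpy as np

# Toy Volterra equation of the first kind: integral_0^t f(s) ds = g(t).
n, h = 100, 0.01
t = h * (np.arange(n) + 1)
A = np.tril(np.full((n, n), h))        # rectangle-rule quadrature matrix
f_true = np.sin(2.0 * np.pi * t)
g = A @ f_true

f_clean = np.linalg.solve(A, g)        # exact data: inversion works fine
g_noisy = g + 1e-3 * np.random.default_rng(3).normal(size=n)
f_noisy = np.linalg.solve(A, g_noisy)  # small data noise -> large solution error
```

The naive inverse effectively differentiates the data, so a data error of 10^-3 becomes a solution error of order 10^-3/h = 0.1; regularization trades some resolution for stability against exactly this amplification.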
The present work investigates structure formation and wetting in two-dimensional (2D) Langmuir monolayer phases in local thermodynamic equilibrium. A Langmuir monolayer is an isolated 2D system of surfactants at the air/water interface. It exhibits crystalline, liquid crystalline, liquid and gaseous phases differing in positional and/or orientational order. Permanent electric dipole moments of the surfactants lead to a long-range repulsive interaction and to the formation of mesoscopic patterns. An interaction model is used that describes the structure formation as a competition between short-range attraction (bare line tension) and long-range repulsion (surface potentials) on a scale Delta, where Delta has the meaning of a dividing length between the short- and long-range interactions. In the present work the thermodynamic equilibrium conditions for the shape of two-phase boundary lines (Young-Laplace equation) and three-phase intersection points (Young's condition) are derived and applied to describe experimental data: the line tension is measured by pendant droplet tensiometry; the bubble shape and size of 2D foams is calculated numerically and compared to experimental foams; contact angles are measured by fitting numerical solutions of the Young-Laplace equation on the micron scale. The scaling behaviour of the contact angle allows a lower limit for Delta to be measured. Further, it is discussed whether wetting transitions in biological membranes are a way to control reaction kinetics. Studies performed in our group are discussed with respect to this question in the framework of the above-mentioned theory. Finally, the apparent violation of Gibbs' phase rule in Langmuir monolayers (non-horizontal plateau of the surface pressure/area isotherm, extended three-phase coexistence region in one-component systems) is investigated quantitatively.
It has been found that the most probable explanation is impurities within the system, whereas finite-size effects or the influence of the long-range electrostatics cannot explain the order of magnitude of the effect.
The field of gamma-ray astronomy opened a new window into the non-thermal universe that allows studying the acceleration sites of cosmic rays and the role of cosmic rays in evolutionary processes in galaxies. The detection of almost one hundred Galactic very-high-energy (VHE: 0.1-100 TeV) gamma-ray sources in the Milky Way demonstrates that particle acceleration up to tens of TeV energies is a common phenomenon. Furthermore, the detection of VHE gamma rays from other galaxies has confirmed that cosmic rays are not exclusively accelerated in the Milky Way. The rapid development of gamma-ray astronomy in the past two decades has led to a transition from the detection and study of individual sources to source population studies. To answer the question of whether the VHE gamma-ray source population of the Milky Way is unique, observations of galaxies for which individual sources can be resolved are required. Such galaxies are the Magellanic Clouds, two satellite galaxies of the Milky Way, which have been surveyed by the H.E.S.S. experiment over the last decade. In this thesis, data from a total of 450 hours of H.E.S.S. observations towards the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are presented. During the analysis of the data sets, special emphasis is put on the evaluation of the systematic uncertainties of the experiment in order to ensure an unbiased flux estimation for the potential VHE gamma-ray sources of the Magellanic Clouds. A detailed analysis of the survey data revealed the detection of the gamma-ray binary LMC P3, the most powerful gamma-ray binary known so far, which is located in the LMC and thus increases the number of known VHE gamma-ray sources in the LMC to four. No other VHE gamma-ray source is detected in the Magellanic Clouds, and integral flux upper limits are estimated. These flux upper limits are used to perform a source population study based on known VHE source classes and existing multi-wavelength catalogues.
A comparison of the source populations of the Magellanic Clouds and the Milky Way revealed that no other source in the Magellanic Clouds is as bright as the most luminous VHE gamma-ray source in the LMC, the pulsar wind nebula N 157B, and that one third of the source population of the Magellanic Clouds is less luminous than the other known VHE gamma-ray sources in the LMC. Only for a few sources are luminosity levels constrained that correspond to Galactic VHE sources more than one order of magnitude fainter than the detected sources in the LMC. Based on the flux upper limits, differences between the TeV source populations of the Magellanic Clouds and the Milky Way, as well as the importance of the source environments, are discussed.
This thesis discusses heat and charge transport phenomena in single-crystalline silicon penetrated by nanometer-sized pores, known as mesoporous silicon (pSi). Despite the extensive attention pSi has received as a thermoelectric material, studies on microscopic thermal and electronic transport beyond macroscopic characterization are rarely reported. In contrast, this work reports the interplay of both.
Samples of pSi synthesized by electrochemical anodization display a temperature dependence of the specific heat C_p that deviates from the characteristic T^3 behaviour at T < 50 K. A thorough analysis reveals that both 3D and 2D Einstein and Debye modes contribute to this specific heat. Additional 2D Einstein modes (~3 meV) agree reasonably well with the boson peak of SiO2 in the pSi pore walls. 2D Debye modes are proposed to account for surface acoustic modes causing a significant deviation from the well-known T^3 dependence of C_p at T < 50 K.
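The qualitative argument, that a small admixture of a low-energy Einstein mode on top of a Debye background produces an excess over the T^3 law, can be sketched with the textbook expressions; theta_D, theta_E and the admixture weight below are hypothetical toy values (theta_E ~ 35 K corresponds to ~3 meV).

```python
import numpy as np

def c_debye(T, theta_D):
    """3D Debye heat capacity per atom, in units of k_B."""
    out = np.empty_like(T, dtype=float)
    for i, t in enumerate(T):
        x = np.linspace(1e-6, theta_D / t, 2000)
        out[i] = 9.0 * (t / theta_D) ** 3 * np.trapz(x ** 4 * np.exp(x) / np.expm1(x) ** 2, x)
    return out

def c_einstein(T, theta_E):
    """Einstein heat capacity per oscillator, in units of k_B."""
    y = theta_E / T
    return 3.0 * y ** 2 * np.exp(y) / np.expm1(y) ** 2

T = np.linspace(2.0, 60.0, 59)
# Debye background plus a small admixture of a ~3 meV (theta_E ~ 35 K) Einstein mode:
c_total = c_debye(T, 500.0) + 0.05 * c_einstein(T, 35.0)
ratio = c_total / T ** 3   # constant for a pure Debye solid at low T
```

For a pure Debye solid C/T^3 is constant at low temperature; the Einstein admixture produces a boson-peak-like hump in C/T^3 near T of roughly theta_E/5.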
A novel theoretical model gives insights into the thermal conductivity of pSi in terms of porosity and phonon scattering on the nanoscale. The thermal conductivity analysis utilizes the peculiarities of the pSi phonon dispersion probed by inelastic neutron scattering experiments. A phonon mean free path of around 10 nm, extracted from the presented model, is proposed to cause the thermal conductivity of pSi to be reduced by two orders of magnitude compared to p-doped bulk silicon. Detailed analysis indicates that compound averaging may cause a further 10-50% reduction. The percolation threshold of 65% for the thermal conductivity of pSi samples is subsequently determined by employing theoretical effective-medium models.
Temperature-dependent electrical conductivity measurements reveal a thermally activated transport process. A detailed analysis of the activation energy E_A,σ of this thermally activated transport exhibits a Meyer-Neldel compensation rule between different samples that originates in multi-phonon absorption upon carrier transport. Activation energies E_A,S obtained from temperature-dependent thermopower measurements provide further evidence for multi-phonon-assisted hopping between localized states as the dominant charge transport mechanism in pSi, as they systematically differ from the determined E_A,σ values.
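The Meyer-Neldel compensation rule can be made concrete with synthetic Arrhenius data (all numbers hypothetical): if the conductivity prefactor grows exponentially with the activation energy, sigma0 = sigma00 * exp(EA / E_MN), then the intercepts of Arrhenius fits fall on a straight line of slope 1/E_MN when plotted against the fitted activation energies.

```python
import numpy as np

kB = 8.617e-5                # Boltzmann constant in eV/K
E_MN = 0.04                  # hypothetical Meyer-Neldel energy (eV)
sigma00 = 1e-2               # hypothetical common prefactor
T = np.linspace(150.0, 300.0, 40)

def sigma_arrh(T, EA):
    """Arrhenius conductivity whose prefactor obeys the Meyer-Neldel rule:
    sigma0(EA) = sigma00 * exp(EA / E_MN)."""
    return sigma00 * np.exp(EA / E_MN) * np.exp(-EA / (kB * T))

# "Measure" three samples with different activation energies and recover
# EA (from the slope) and ln(sigma0) (from the intercept) via Arrhenius
# fits of ln(sigma) against 1/T.
EAs = [0.15, 0.25, 0.35]
fits = [np.polyfit(1.0 / T, np.log(sigma_arrh(T, EA)), 1) for EA in EAs]
EA_fit = [-s * kB for s, b in fits]
ln_pref = [b for s, b in fits]
```

Plotting ln_pref against EA_fit recovers the compensation line with slope 1/E_MN, which is the signature the thesis reports between different pSi samples.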
With the implementation of intense, short-pulsed light sources over the last years, the powerful technique of resonant inelastic X-ray scattering (RIXS) became feasible for a wide range of experiments on femtosecond dynamics in correlated materials and molecules.
In this thesis I investigate the potential to bring RIXS into the fluence regime of nonlinear X-ray-matter interactions, especially focusing on the impact of stimulated scattering on RIXS in transition metal systems in a transmission spectroscopy geometry around transition metal L-edges.
After presenting the RIXS toolbox and the capabilities of free electron laser light sources for ultrafast intense X-ray experiments, the thesis explores an experiment designed to understand the impact of stimulated scattering on diffraction and direct-beam transmission spectroscopy of a CoPd multilayer system. The experiments require short X-ray pulses that can only be generated at free electron lasers (FELs). Here the pulses are not only short, but also very intense, which opens the door to nonlinear X-ray-matter interactions. In the second part of this thesis, we investigate observations in the nonlinear interaction regime, look at potential difficulties for classic spectroscopy and investigate possibilities to enhance RIXS through stimulated scattering. A study on stimulated RIXS is presented, in which we investigate the light-field-intensity-dependent CoPd demagnetization in transmission as well as scattering geometry. Thereby we show the first direct observation of stimulated RIXS as well as light-field-induced nonlinear effects,
namely the breakdown of scattering intensity and the increase in sample transmittance. The topic is of ongoing interest and will only grow in relevance as more free electron lasers are planned and the number of experiments at such light sources continues to increase in the near future.
Finally, we present a discussion of the accessibility of small DOS shifts in the absorption band of transition metal complexes through stimulated resonant X-ray scattering. As such shifts occur, for example, in surface states, this finding could extend the experimental selectivity of NEXAFS and RIXS to the detection of surface states. In this theoretical study, we show how stimulation can indeed enhance the visibility of DOS shifts through the detection of stimulated spectral shifts and enhancements. We also forecast the observation of stimulated enhancements in resonant excitation experiments at FEL sources in systems with a high density of states just below the Fermi edge and in systems with an occupied-to-unoccupied DOS ratio in the valence band above 1.
The lives of more than one sixth of the world's population are directly affected by the caprices of the South Asian summer monsoon rainfall. India receives around 78 % of its annual precipitation during the June-September months, the summer monsoon season of South Asia. The monsoon circulation, however, is not consistent throughout the entire summer season: episodes of heavy rainfall (active periods) and low rainfall (break periods) are inherent to the intraseasonal variability of the South Asian summer monsoon. Extended breaks, i.e. long-lasting dry spells, can result in droughts and hence trigger crop failures and in turn famines. Furthermore, India's electricity generation from renewable sources (wind and hydro-power), which is increasingly important in order to satisfy the rapidly rising demand for energy, is highly reliant on the prevailing meteorology. The major drought years 2002 and 2009 of the Indian summer monsoon during the last decades, both the result of multiple extended breaks, exemplify that understanding the monsoon system and its intraseasonal variation is of greatest importance. Although numerous studies based on observations, reanalysis data and global model simulations have focused on monsoon active and break phases over India, the understanding of the monsoon intraseasonal variability is still in its infancy. Regional climate models could benefit the comprehension of monsoon breaks through their resolution advantage.
This study investigates the moist dynamical processes that initiate and maintain breaks during the South Asian summer monsoon, using the atmospheric regional climate model HIRHAM5 at a horizontal resolution of 25 km, forced by the ECMWF ERA-Interim reanalysis for the period 1979-2012. By calculating moisture and moist static energy budgets, the various competing mechanisms leading to extended breaks are quantitatively estimated. Advection of dry air from the deserts of western Asia towards central India is the dominant moist dynamical process initiating extended break conditions over South Asia. Once initiated, the extended breaks are maintained by several competing mechanisms: (i) the anomalous easterlies at the southern flank of the associated anticyclonic anomaly over India weaken the low-level cross-equatorial jet and thus the moisture transport into the monsoon region, (ii) differential radiative heating over the continental and the oceanic tropical convergence zone induces a local Hadley circulation with anomalous rising motion over the equatorial Indian Ocean and descent over central India, and (iii) a cyclonic response to positive rainfall anomalies over the near-equatorial Indian Ocean amplifies the anomalous easterlies over India and hence contributes to the low-level divergence over central India.
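The column-integrated moisture budget used to diagnose such mechanisms is commonly written in the following standard form (the thesis uses an analogous decomposition, together with a moist static energy budget):

```latex
\frac{\partial \langle q \rangle}{\partial t}
  = -\,\big\langle \mathbf{v}\cdot\nabla q \big\rangle
    - \Big\langle \omega \,\frac{\partial q}{\partial p} \Big\rangle
    + E - P,
\qquad
\langle \,\cdot\, \rangle = \frac{1}{g}\int_{p_t}^{p_s} (\,\cdot\,)\, dp
```

where q is the specific humidity, **v** the horizontal wind, ω the pressure velocity, E evaporation and P precipitation; the horizontal advection term on the right is the one identified above as dominant in initiating extended breaks.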
A sensitivity experiment that mimics a scenario of higher atmospheric aerosol concentrations over South Asia addresses a current issue of large uncertainty: the role aerosols play in suppressing monsoon rainfall and hence in triggering breaks. To study the indirect aerosol effects, the cloud droplet number concentration was increased to imitate the aerosols' function as cloud condensation nuclei. The sensitivity experiment with altered microphysical cloud properties shows a reduction in the summer monsoon precipitation together with a weakening of the South Asian summer monsoon. Several physical mechanisms are proposed to be responsible for the suppressed monsoon rainfall: (i) in line with the first indirect radiative forcing, the increase in the number of cloud droplets causes an increase in the cloud reflectivity of solar radiation, leading to a climate cooling over India which in turn weakens the hydrological cycle, (ii) a stabilisation of the troposphere, induced by a differential cooling between the surface and the upper troposphere over central India, inhibits the growth of deep convective rain clouds, (iii) an increase in the amount of low- and mid-level clouds together with a decrease in high-level cloud amount amplifies the surface cooling and hence the atmospheric stability, and (iv) dynamical changes of the monsoon, manifested as an anomalous anticyclonic circulation over India, reduce the moisture transport into the monsoon region. The study suggests that the changes in the total precipitation, which are dominated by changes in the convective precipitation, mainly result from the indirect radiative forcing. Suppression of rainfall due to the direct microphysical effect is found to be negligible over India. Break statistics of the polluted cloud scenario indicate an increase in the occurrence of short breaks (3 days), while the frequency of extended breaks (> 7 days) is clearly not affected.
This disproves the hypothesis that more and smaller cloud droplets, caused by a high load of atmospheric aerosols, trigger long drought conditions over central India.
In nonlinear time series analysis, a Monte Carlo test method has become established over roughly the past ten years: the Theiler surrogate method, which is used to decide whether a data series is of nonlinear origin. In this work, the method is criticized, modified and generalized. What Theiler intends to investigate requires different surrogate methods, which are constructed here; and what Theiler actually investigates requires no Monte Carlo methods at all. Using the concept of the phase signal introduced in this thesis, test procedures are presented and relations between the nonlinear properties of a time series and its phase spectrum are explored. The phase signal is derived from the phase spectrum of the time series and registers exceptional events in the time domain as well as phase couplings in the frequency domain. The insights gained are applied to the problem of polar motion: the hypothesis of a nonlinear relation between the atmospheric excitation and the polar motion is examined, and a nonlinear treatment is found to be unnecessary.
Membrane fusion is a crucial process in the development of cells in the body. For example, it is one of the prerequisites for the fertilization of an egg cell by a sperm and for the entry of viruses into a cell. Membrane fusion is also necessary for the transport of substances into and out of the cell. It is therefore also of practical interest in the fields of pharmaceutics and bioengineering: often a membrane must fuse with an infected cell in order to deliver a drug to its target. An understanding of membrane fusion is thus of great interest for the development of targeted and efficient drug-delivery methods. The same holds for the targeted delivery of genes in gene therapy. Although membrane fusion was observed almost 200 years ago by the German biologist and physician Johannes Müller, a complete understanding of the fusion process of cells and (model) membranes is still far off. In the last decade, however, interest in this field of research has grown strongly. Scientists from the most diverse disciplines are working to uncover the mechanisms of membrane fusion: biologists study proteins that trigger fusion, chemists develop molecules that facilitate fusion, and physicists try to understand the driving mechanisms of membrane fusion. New microscopy techniques and the high computing power of modern computers help to merge the molecular and the macroscopic world of membrane fusion into a single picture. For our investigations we used model membranes consisting of lipid bilayers. These membranes form so-called vesicles or liposomes, closed membrane compartments enclosing a certain amount of liquid.
By incorporating receptors into the membrane, we create functionalized vesicles that can differentiate, cooperate and react selectively. We use positively charged, water-soluble ions to mediate interactions between the vesicles, and let the receptors and the ions trigger the fusion process. The interactions are controlled under the microscope by special micromechanical devices. Using a very fast digital imaging technique, we succeeded in recording the fusion of our model membranes and documenting it in real time with a resolution of 50 µs. Our measurements can be compared with computer simulations of the fusion process, which probe processes lasting between 0.1 and 1 microsecond. A challenge for the future will be to close the gap between the time scales accessible in experiments (50 µs) and in simulations from both sides.
This thesis addresses the assumption that earthquakes are rooted in a self-organized critical state of the Earth's crust. Using an extension of previous models, it is shown that such a state can account not only for the size distribution of earthquakes (the Gutenberg-Richter law) but also for their observed spatiotemporal occurrence, e.g. the Omori law for aftershock sequences. Furthermore, the question of the predictability of large earthquakes in such model simulations is investigated.
Thermal and quantum fluctuations of the electromagnetic near field of atoms and macroscopic bodies play a key role in quantum electrodynamics (QED), as in the Lamb shift. They lead, e.g., to atomic level shifts, dispersion interactions (Van der Waals-Casimir-Polder interactions), and state broadening (Purcell effect) because the field is subject to boundary conditions. Such effects can be observed with high precision on the mesoscopic scale which can be accessed in micro-electro-mechanical systems (MEMS) and solid-state-based magnetic microtraps for cold atoms (‘atom chips’). A quantum field theory of atoms (molecules) and photons is adapted to nonequilibrium situations. Atoms and photons are described as fully quantized while macroscopic bodies can be included in terms of classical reflection amplitudes, similar to the scattering approach of cavity QED. The formalism is applied to the study of nonequilibrium two-body potentials. We then investigate the impact of the material properties of metals on the electromagnetic surface noise, with applications to atomic trapping in atom-chip setups and quantum computing, and on the magnetic dipole contribution to the Van der Waals-Casimir-Polder potential in and out of thermal equilibrium. In both cases, the particular properties of superconductors are of high interest. Surface-mode contributions, which dominate the near-field fluctuations, are discussed in the context of the (partial) dynamic atomic dressing after a rapid change of a system parameter and in the Casimir interaction between two conducting plates, where nonequilibrium configurations can give rise to repulsion.
This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation and the annual cycle. More precisely, the work focused on two main problems: 1. How to separate both oscillations within a tractable model for understanding the behaviour of the whole system. 2. How to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering that atmospheric effects are secondary to the ocean dynamics. The results may be summarised as follows: 1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction prove better at defining a lower limit for the dimensionality of the system and at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling, provides a physically appealing method of decomposing the data, as it substitutes the Euclidean distances in the manifold by an approximation of the geodesic distances. We expect that this method could be successfully applied to other oscillatory extended systems and, in particular, to meteorological systems. 2. A three-dimensional dynamical system could be modelled, using a backfitting algorithm, to describe the dynamics of the sea surface temperature in the tropical Pacific Ocean.
We observed that, although few data points were available, we could predict future behaviours of the coupled ENSO-annual cycle system with a prediction horizon of up to six months, although the constructed system presented several drawbacks: few data points to feed into the backfitting algorithm, an untrained model, no forcing with external data, and the simplification of using a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skill of the three-dimensional time series was as good as that of much more complex models. This suggests that the climatological system in the tropics is mainly explained by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. A global view of the work shows a general procedure for modelling climatological systems: first, find a suitable method of linear or nonlinear dimensionality reduction; then, extract low-dimensional time series from the applied method; finally, fit a low-dimensional model using a backfitting algorithm in order to predict future states of the system.
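The key idea of Isomap mentioned above, replacing Euclidean distances in classical multidimensional scaling by graph-based approximations of geodesic distances on the data manifold, can be sketched from scratch. This is an illustrative toy implementation (not the thesis code); the function name and the noisy-circle test data are made up for the example:

```python
import numpy as np

def isomap(X, n_neighbors=8, n_components=2):
    """Minimal Isomap: classical MDS on graph-geodesic distances."""
    n = len(X)
    # Pairwise Euclidean distances in the ambient space
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # k-nearest-neighbour graph; non-neighbours start at infinity
    G = np.full((n, n), np.inf)
    for i in range(n):
        nbrs = np.argsort(D[i])[: n_neighbors + 1]  # includes i itself
        G[i, nbrs] = D[i, nbrs]
    G = np.minimum(G, G.T)  # symmetrize the graph
    # Floyd-Warshall: shortest graph paths approximate geodesics
    for k in range(n):
        G = np.minimum(G, G[:, k:k + 1] + G[k:k + 1, :])
    # Classical MDS: double centering + top eigenvectors
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy data: a noisy closed curve (a 1D manifold) embedded in 3D,
# standing in for the oscillatory SST field of the thesis
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
X = np.c_[np.cos(t), np.sin(t), 0.05 * np.sin(5 * t)]
Y = isomap(X)
print(Y.shape)  # (100, 2)
```

The only difference from classical MDS is the shortest-path step: distances along the neighbourhood graph, rather than straight-line distances, enter the double-centered Gram matrix.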
The biological function and the technological applications of semiflexible polymers, such as DNA, actin filaments and carbon nanotubes, strongly depend on their rigidity. Semiflexible polymers are characterized by their persistence length, the definition of which is the subject of the first part of this thesis. Attractive interactions, which arise, e.g., in the adsorption, condensation and bundling of filaments, can change the conformation of a semiflexible polymer. The conformation depends on the relative magnitude of the material parameters and can be influenced by them in a systematic manner. In particular, the morphologies of semiflexible polymer rings, such as circular nanotubes or DNA, adsorbed onto substrates with three types of structure are studied: (i) a topographical channel, (ii) a chemically modified stripe and (iii) a periodic pattern of topographical steps. The results are compared with the condensation of rings by attractive interactions. Furthermore, the bundling of two individual actin filaments whose ends are anchored is analyzed. This system geometry is shown to provide a systematic and quantitative method to extract the magnitude of the attraction between the filaments from experimentally observable conformations.
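A standard way to make the persistence length precise (the textbook worm-like chain definition, whose subtleties the thesis examines) is through the exponential decay of tangent-tangent correlations along the contour:

```latex
\big\langle \mathbf{t}(s)\cdot\mathbf{t}(s') \big\rangle \;=\; e^{-|s-s'|/\ell_p},
\qquad
\ell_p \;=\; \frac{2\kappa}{(d-1)\,k_B T}
```

where t(s) is the unit tangent vector at arc length s, κ the bending rigidity and d the embedding dimension, so that ℓ_p = κ/(k_B T) in three dimensions and ℓ_p = 2κ/(k_B T) for a filament confined to a plane.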
Passive plant actuators have fascinated researchers in botany and structural biology for at least a century. To date, the most investigated tissue types in plant and artificial passive actuators are fibre-reinforced composites (and multilayered assemblies thereof), in which stiff, almost inextensible cellulose microfibrils direct the otherwise isotropic swelling of a matrix. In addition, Nature provides examples of actuating systems based on lignified, low-swelling cellular solids enclosing a high-swelling cellulosic phase. This is the case of the Delosperma nakurense seed capsule, in which a specialized tissue promotes the reversible opening of the capsule upon wetting. This tissue has a diamond-shaped honeycomb microstructure characterized by high geometrical anisotropy: when the cellulosic phase swells inside this constraining structure, the tissue deforms up to four times its original dimension in one principal direction while maintaining its original dimension in the other. Inspired by the example of Delosperma nakurense, in this thesis we analyze the role of the architecture of 2D cellular solids as models for natural hygromorphs. To start, we consider a simple fluid pressure acting in the cells and assess the influence of several architectural parameters on their mechanical actuation. Since internal pressurization is a configurational type of load (that is, the load direction is not fixed but "follows" the structure as it deforms), it results in the cellular structure acquiring a "spontaneous" shape. This shape is independent of the load and depends only on the architectural characteristics of the cells making up the structure. Whereas regular convex-tiled cellular solids (such as hexagonal, triangular or square lattices) deform isotropically upon pressurization, we show through finite element simulations that by introducing anisotropic, non-convex, re-entrant tilings, large expansions can be achieved in each individual cell.
The influence of geometrical anisotropy on the expansion behaviour of a diamond-shaped honeycomb is assessed by FEM calculations and a Born lattice approximation. We found that anisotropic expansions (eigenstrains) comparable to those observed in the keel tissue of Delosperma nakurense are possible. In particular, these depend on the relative contributions of bending and stretching of the beams building up the honeycomb. Moreover, by varying the walls' Young's modulus E and the internal pressure p, we found that both the eigenstrains and the 2D elastic moduli scale with the ratio p/E. This outlines the potential of these pressurized structures as soft actuators. The approach was extended by considering several 2D cellular solids based on two types of non-convex cells, each honeycomb being built as a lattice of only one non-convex cell. Compared to usual honeycombs, these lattices have kinked walls between neighbouring cells, which provide a hidden length scale allowing large directed deformations. By comparing the area expansion in all lattices, we were able to show that less convex cells tend to achieve larger area expansions, but the direction in which the material expands is variable and depends on the local cell connectivity. This has repercussions at both the macroscopic (lattice) and microscopic (cell) scales. At the macroscopic scale, these non-convex lattices can exhibit large anisotropic (similarly to the diamond-shaped honeycomb) or perfectly isotropic principal expansions, large shearing deformations, or a mixed behaviour. Moreover, lattices that expand similarly at the macroscopic scale can show quite different microscopic deformation patterns, including zig-zag motions and radical changes of the initial cell shape.
Depending on the lattice architecture, the microscopic deformations of the individual cells can be equal or not, so that they either build up or mutually compensate and hence give rise to the aforementioned variety of macroscopic behaviours. Interestingly, simple geometrical arguments involving the undeformed cell shape and its local connectivity make it possible to predict the results of the FE simulations. Motivated by the results of the simulations, we also created 3D-printed experimental models of such actuating structures. When swollen, the models undergo substantial deformation, with deformation patterns qualitatively following those predicted by the simulations. This work highlights how the internal architecture of a swellable cellular solid can lead to complex shape changes, which may be useful in the fields of soft robotics or morphing structures.
Observational and computational extragalactic astrophysics are two fields of research that study a similar subject from different perspectives. Observational extragalactic astrophysics aims, by recovering the spectral energy distribution of galaxies at different wavelengths, to reliably measure their properties at different cosmic times and in a large variety of environments. Analyzing the light collected by the instruments, observers try to disentangle the different processes occurring in galaxies at the scales of galactic physics, as well as the effect of larger-scale processes such as mergers and accretion, in order to obtain a consistent picture of galaxy formation and evolution. On the other hand, hydrodynamical simulations of galaxy formation in a cosmological context are able to follow the evolution of a galaxy along cosmic time, taking into account both external processes such as mergers, interactions and accretion, and internal mechanisms such as feedback from supernovae and Active Galactic Nuclei. Thanks to the great advances in both fields of research, spectral and photometric information is now available for a large number of galaxies in the Universe at different cosmic times, which has in turn provided important knowledge about the evolution of the Universe; at the same time, we are able to realistically simulate galaxy formation and evolution in large volumes of the Universe, taking into account the most relevant physical processes occurring in galaxies.
As these two approaches are intrinsically different in their methodology and in the information they provide, the connection between simulations and observations is still not fully established. Nevertheless, simulations are often used in galaxy studies to interpret observations and to assess how the different processes acting on galaxies affect their observable properties, and simulators usually test the physical recipes implemented in their hydrodynamical codes through comparison with observations. In this dissertation we aim to better connect the observational and computational approaches to the study of galaxy formation and evolution, using the methods and results of one field to test and validate those of the other.
In a first work we study the biases and systematics in the derivation of galaxy properties in observations. We post-process hydrodynamical cosmological simulations of galaxy formation to calculate the galaxies' Spectral Energy Distributions (SEDs) using different approaches, including radiative transfer techniques. Comparing the direct results of the simulations with the quantities obtained by applying observational techniques to these synthetic SEDs, we are able to analyze the biases intrinsic to the observational algorithms, quantify their accuracy in recovering the galaxies' properties, and estimate the uncertainties affecting a comparison between simulations and observations when different approaches to obtaining the observables are followed. Our results show that for some quantities, such as stellar ages, metallicities and gas oxygen abundances, large differences can appear depending on the technique applied in the derivation.
In a second work we compare a set of fifteen galaxies similar in mass to the Milky Way and with a quiet merger history in the recent past (hence expected to have properties close to those of spiral galaxies), simulated in a cosmological context, with data from the Sloan Digital Sky Survey (SDSS). We derive the observables with techniques as similar as possible to the ones applied in SDSS, with the aim of making an unbiased comparison between our set of hydrodynamical simulations and SDSS observations. We quantify the differences in the physical properties when these are obtained directly from the simulations without post-processing, or by mimicking the SDSS observational techniques. We fit linear relations between the values derived directly from the simulations and those following SDSS observational procedures; in most cases these relations show relatively high correlation and can easily be used to compare simulations with SDSS data more reliably. When mimicking SDSS techniques, these simulated galaxies are photometrically similar to galaxies in the SDSS blue sequence/green valley, but have in general older ages and lower SFRs and metallicities than the majority of the spirals in the observational dataset.
In a third work, we post-process hydrodynamical simulations of galaxies with radiative transfer techniques to generate synthetic data that mimic the properties of the CALIFA Integral Field Spectroscopy (IFS) survey. We reproduce the main characteristics of the CALIFA observations in terms of field of view and spaxel physical size, data format, point spread functions and detector noise. This 3-dimensional dataset is suited for analysis by the same algorithms applied to the CALIFA dataset, and can be used as a tool to test the ability of the observational algorithms to recover the properties of the CALIFA galaxies. For this purpose, we also generate the resolved maps of the simulations' properties, calculated directly from the hydrodynamical snapshots or from the simulated spectra prior to the addition of noise.
Our work shows that a reliable connection between models and data is of crucial importance, both to judge the output of galaxy formation codes and to accurately test the observational algorithms used in the analysis of galaxy survey data. A correct interpretation of observations will be particularly important in the future, in light of the several ongoing and planned large galaxy surveys that will provide the community with large datasets of galaxy properties (often spatially resolved) at different cosmic times, making it possible to study galaxy formation physics at a higher level of detail than ever before. We have shown that neglecting the observational biases in the comparison between simulations and an observational dataset may move the simulations to different regions in the planes of the observables, strongly affecting the assessment of the correctness of the sub-resolution physical models implemented in galaxy formation codes, as well as the interpretation of given observational results using simulations.
Late-type stars are by far the most frequent stars in the universe and of fundamental interest to various fields of astronomy – most notably to Galactic archaeology and exoplanet research. However, such stars barely change during their main sequence lifetime; their temperature, luminosity, or chemical composition evolve only very slowly over the course of billions of years. As such, it is difficult to obtain the age of such a star, especially when it is isolated and no other indications (like cluster association) can be used. Gyrochronology offers a way to overcome this problem.
Stars, like all other objects in the universe, rotate, and the rate at which a star rotates affects many aspects of its appearance and evolution. Gyrochronology leverages the observed rotation rate of a late-type main sequence star and its systematic evolution to estimate the star's age. Unlike the parameters mentioned above, the rotation rate of a main sequence star changes drastically throughout its main sequence lifetime: stars spin down. The youngest stars rotate once every few hours, whereas much older stars rotate only about once a month or, in the case of some late M stars, once in a hundred days. Given that this spindown is systematic (with an additional mass dependence), it gave rise to the idea of using the observed rotation rate of a star (and its mass or a suitable proxy thereof) to estimate the star's age. This has been explored widely in young stellar open clusters but remains essentially unconstrained for stars older than the Sun, and for K and M stars older than 1 Gyr.
This thesis focuses on the continued exploration of the spindown behavior, to assess whether gyrochronology remains applicable to old stars, whether it is universal for late-type main sequence stars (including field stars), and to provide calibration mileposts for spindown models. To accomplish this, I analyzed data from the Kepler space telescope for the open clusters Ruprecht 147 (2.7 Gyr old) and M 67 (4 Gyr old). Time series photometry data (light curves)
were obtained for both clusters during Kepler’s K2 mission. However, due to technical limitations and telescope malfunctions, extracting usable data from the K2 mission to identify (especially long) rotation periods requires extensive data preparation.
For Ruprecht 147, I compiled a list of about 300 cluster members from the literature and adopted preprocessed light curves from the Kepler archive where available. These had been cleaned of the gravest data artifacts but still contained systematics. After correcting for these artifacts, I was able to identify rotation periods in 31 of the light curves.
For M 67, more effort was required. My work on Ruprecht 147 had shown the limitations imposed by the preselection of Kepler targets. Therefore, I worked directly with the time series of full-frame images and performed photometry at a much higher spatial resolution, in order to obtain data for as many stars as possible. This also meant dealing with the ubiquitous artifacts in Kepler data. For that, I devised a method that correlates the artificial flux variations with the ongoing drift of the telescope pointing in order to remove them. This process was a great success, and I was able to create light curves whose quality matches and even exceeds that of those created by the Kepler mission, all while operating at higher spatial resolution and processing fainter stars. Ultimately, I was able to identify signs of periodic variability in the resulting light curves of 31 and 47 stars in Ruprecht 147 and M 67, respectively. My data connect well to bluer stars in clusters of the same age and extend for the first time to stars redder than early K and older than 1 Gyr. The cluster data show a clear flattening in the period distribution of Ruprecht 147 and even a downturn for M 67, resulting in a somewhat sinusoidal shape. With that, I have shown that the systematic spindown of stars continues at least until 4 Gyr and that stars continue to live on a single surface in age-rotation period-mass space, which allows gyrochronology to be used at least up to that age. However, the shape of the spindown, as exemplified by the newly discovered sinusoidal shape of the cluster sequence, deviates strongly from expectations.
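The pointing-drift decorrelation described above can be sketched as a linear regression of the flux against the centroid drift, whose fitted component is then subtracted. This is an illustrative simplification of such a pipeline (the actual method is more involved); the toy light curve and drift proxies are invented for the example:

```python
import numpy as np

def decorrelate(flux, dx, dy):
    """Remove flux variations correlated with pointing drift (dx, dy)
    via linear least squares; returns the corrected light curve."""
    A = np.c_[np.ones_like(dx), dx, dy]           # design matrix
    coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
    model = A @ coeffs                            # drift-correlated part
    return flux - model + np.median(flux)         # restore the flux level

# Toy light curve: rotation signal + drift-induced systematics + noise
rng = np.random.default_rng(1)
t = np.linspace(0, 80, 2000)                      # time in days
dx, dy = np.sin(t / 3), np.cos(t / 3)             # pointing-drift proxies
flux = 1.0 + 0.01 * np.sin(2 * np.pi * t / 25)    # 25-day rotation signal
flux += 0.05 * dx - 0.03 * dy                     # artificial systematics
flux += rng.normal(0.0, 0.001, t.size)            # photon noise
clean = decorrelate(flux, dx, dy)
```

By construction, the least-squares residual is orthogonal to the drift regressors, so the corrected light curve carries no linear trace of the pointing motion while the astrophysical rotation signal, being uncorrelated with the drift, survives.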
I then compiled an extensive sample of rotation data in open clusters, including my own work, and used the resulting cluster skeleton (with each cluster forming a rib in color-rotation period-mass space) to investigate whether field stars follow the same spindown as cluster stars. For the field stars, I used wide binaries, which, with their shared origin and coevality, are in a sense the smallest possible open clusters. I devised an empirical method to evaluate the consistency between the rotation rates of the wide-binary components and found that the vast majority of them are in fact consistent with what is observed in open clusters. This leads me to conclude that gyrochronology, calibrated on open clusters, can be applied to determine the ages of field stars.
Due to its relevance for the global climate, the realistic representation of the Atlantic meridional overturning circulation (AMOC) in ocean models is a key task. In recent years, two paradigms have evolved around its driving mechanisms: diapycnal mixing and Southern Ocean winds. This work aims at clarifying what sets the strength of the Atlantic overturning components in an ocean general circulation model and discusses the roles of spatially inhomogeneous mixing, numerical diffusion and winds. Furthermore, the relation of the AMOC to a key quantity, the meridional pressure difference, is analyzed. Owing to the application of a very low-diffusive tracer advection scheme, a realistic Atlantic overturning circulation can be obtained that is purely wind-driven. On top of the wind-driven circulation, changes of density gradients are caused by increasing the parameterized eddy diffusion in the North Atlantic and the Southern Ocean. The linear relation between the maximum of the Atlantic overturning and the meridional pressure difference found in previous studies is confirmed, and it is shown to be due to one significant pressure gradient between the average pressure over high-latitude deep water formation regions and a relatively uniform pressure between 30°N and 30°S, which can be related directly to a zonal flow through geostrophy. Under constant Southern Ocean wind stress forcing, a South Atlantic outflow in the range of 6-16 Sv is obtained for a large variety of experiments. Overall, the circulation is wind-driven, but its strength is not uniquely determined by the Southern Ocean wind stress. The scaling of the Atlantic overturning components is linear in the background vertical diffusivity, not confirming the 2/3 power law found for one-hemisphere models without wind forcing. The pycnocline depth is constant in the coarse resolution model with its large vertical grid extents.
This suggests the ocean model operates like the Stommel box model, with a linear relation to the pressure difference and a fixed vertical scale for the volume transport. However, this seems valid only for vertical diffusivities smaller than 0.4 cm²/s, when the dominant upwelling within the Atlantic occurs along the boundaries. For larger vertical diffusivities, a significant amount of interior upwelling occurs. It is further shown that localized vertical mixing in the deep to bottom ocean cannot drive an Atlantic overturning. However, enhanced boundary mixing at thermocline depths is potentially important. The numerical diffusion is shown to have a large impact on the representation of the Atlantic overturning in the model. While the horizontal numerical diffusion tends to destabilize the Atlantic overturning, the vertical numerical diffusion acts as an amplifying mechanism.
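The Stommel-like closure invoked above — overturning transport proportional to the meridional density (pressure) difference — can be sketched in a few lines. The parameter values below are illustrative placeholders, not those of the thesis or of Stommel (1961):

```python
# Minimal Stommel-type closure: transport q is linear in the meridional
# density difference k*(alpha*dT - beta*dS). All numbers are illustrative.
alpha = 1.7e-4   # thermal expansion coefficient (1/K), assumed
beta = 7.6e-4    # haline contraction coefficient (1/psu), assumed
k = 2.0e9        # flow constant (m^3/s per unit density contrast), assumed

def transport(dT, dS):
    """Overturning transport (m^3/s) from box temperature/salinity contrasts."""
    return k * (alpha * dT - beta * dS)

q_sv = transport(20.0, 2.0) / 1e6   # express in Sverdrup (1 Sv = 1e6 m^3/s)
```

With these placeholder contrasts the sketch yields a few Sverdrup, the right order of magnitude for the South Atlantic outflow range quoted above.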
Supernovae are known to be the dominant energy source for driving turbulence in the interstellar medium. Yet, their effect on magnetic field amplification in spiral galaxies is still poorly understood. Analytical models based on the uncorrelated-ensemble approach predicted that any created field will be expelled from the disk before a significant amplification can occur. By means of direct simulations of supernova-driven turbulence, we demonstrate that this is not the case. Accounting for vertical stratification and galactic differential rotation, we find an exponential amplification of the mean field on timescales of 100 Myr. The self-consistent numerical verification of such a “fast dynamo” is highly beneficial in explaining the observed strong magnetic fields in young galaxies. We furthermore highlight the importance of rotation in the generation of helicity by showing that a similar mechanism based on Cartesian shear does not lead to a sustained amplification of the mean magnetic field. This finding impressively confirms the classical picture of a dynamo based on cyclonic turbulence.
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify the measurement errors inherent in paleoclimate data. An additional focus lies on the uncertainties in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on `proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and
demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known `dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
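The recurrence-based event detection mentioned above can be illustrated in its standard deterministic form (the thesis works with the probabilistic analogue on proxy distributions, so this is a simplified sketch; the threshold and the synthetic series are assumptions):

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps.
    Standard deterministic version; the thesis generalizes this to
    sequences of probability distributions."""
    x = np.asarray(x, float)
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_rate(x, eps):
    """Fraction of recurrent pairs: the density of the recurrence matrix."""
    return recurrence_matrix(x, eps).mean()

# A dynamical event (here a mean shift) erases cross-recurrence between
# the epochs before and after it.
x = np.concatenate([np.zeros(50), np.ones(50) * 5.0])
R = recurrence_matrix(x, eps=1.0)
```

Blocks of high within-epoch recurrence separated by empty cross-recurrence quadrants are the signature such methods look for in the monsoon record.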
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño-Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
Organic-inorganic hybrids based on P3HT and mesoporous silicon for thermoelectric applications
(2024)
This thesis presents a comprehensive study of the synthesis, structure and thermoelectric transport properties of organic-inorganic hybrids based on P3HT and porous silicon. The effect of embedding the polymer in the silicon pores on the electrical and thermal transport is studied. Morphological studies confirm successful polymer infiltration and diffusion doping, with roughly 50% of the pore space occupied by conjugated polymer. Synchrotron diffraction experiments reveal no specific ordering of the polymer inside the pores. P3HT-pSi hybrids show electrical transport improved by five orders of magnitude compared to porous silicon, and power factor values comparable to or exceeding those of other P3HT-inorganic hybrids. The analysis suggests different transport mechanisms in the two materials. In pSi, the transport mechanism relates to a Meyer-Neldel compensation rule. The analysis of the hybrids' data using the power law of the Kang-Snyder model suggests that the doped polymer mainly provides charge carriers to the pSi matrix, similar to the behavior of a doped semiconductor. The heavily suppressed thermal transport in porous silicon is treated with a modified Landauer/Lundstrom model and effective medium theories, which reveal that pSi agrees well with the Kirkpatrick model with a 68% percolation threshold. Thermal conductivities of the hybrids show an increase compared to empty pSi, but the overall thermoelectric figure of merit ZT of the P3HT-pSi hybrid exceeds that of pSi and P3HT as well as bulk Si.
Oscillatory systems under weak coupling can be described by the Kuramoto model of phase oscillators. Kuramoto phase oscillators have diverse applications ranging from phenomena such as communication between neurons and collective influences of political opinions, to engineered systems such as Josephson Junctions and synchronized electric power grids. This thesis includes the author's contribution to the theoretical framework of coupled Kuramoto oscillators and to the understanding of non-trivial N-body dynamical systems via their reduced mean-field dynamics.
The main content of this thesis is composed of four parts. First, a partially integrable theory of globally coupled identical Kuramoto oscillators is extended to include pure higher-mode coupling. The extended theory is then applied to a non-trivial higher-mode coupled model, which has been found to exhibit asymmetric clustering. Using the developed theory, we could predict a number of features of the asymmetric clustering using only information about the initial state.
The second part consists of an iterated discrete-map approach to simulate phase dynamics. The proposed map --- a Moebius map --- not only provides fast computation of phase synchronization, it also precisely reflects the underlying group structure of the dynamics. We then compare the iterated-map dynamics and various analogous continuous-time dynamics. We are able to replicate known phenomena such as the synchronization transition of the Kuramoto-Sakaguchi model of oscillators with distributed natural frequencies, and chimera states for identical oscillators under non-local coupling.
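The group structure mentioned here is that of disk-preserving Moebius transformations acting on phases z = e^{iθ} on the unit circle. A minimal sketch of one such map (the parameter values are arbitrary illustrations, not the map derived in the thesis):

```python
import cmath

def moebius(z, a, psi):
    """Disk-preserving Moebius transformation
    M(z) = e^{i*psi} * (z + a) / (1 + conj(a)*z), with |a| < 1.
    Iterating maps of this form evolves oscillator phases exactly within
    the Moebius group, mirroring the group structure of the phase dynamics."""
    return cmath.exp(1j * psi) * (z + a) / (1 + a.conjugate() * z)

# Phases on the unit circle stay on the unit circle under the map.
phases = [cmath.exp(1j * t) for t in (0.0, 1.0, 2.0, 4.0)]
a, psi = 0.3 + 0.2j, 0.7
mapped = [moebius(z, a, psi) for z in phases]
radii = [abs(z) for z in mapped]   # each equal to 1 up to rounding
```

Because the map sends the unit circle to itself exactly, an iterated-map integrator of this form cannot leak phases off the circle, unlike a generic discretization.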
The third part entails a particular model of repulsively coupled identical Kuramoto-Sakaguchi oscillators under common random forcing, which can be shown to be partially integrable. Via both numerical simulations and theoretical analysis, we determine that such a model cannot exhibit stationary multi-cluster states, contrary to the numerical findings in previous literature. Through further investigation, we find that the multi-clustering states reported previously occur due to the accumulation of discretization errors inherent in the integration algorithms, which introduce higher-mode couplings into the model. As a result, the partial integrability condition is violated.
Lastly, we derive the microscopic cross-correlation of globally coupled non-identical Kuramoto oscillators under common fluctuating forcing. The effect of correlation arises naturally in finite populations, due to the non-trivial fluctuations of the mean field. In an idealized model, we approximate the finite-size fluctuation by a Gaussian white noise. The analytical approximation qualitatively matches the measurements in numerical experiments; however, due to other periodic components inherent in the fluctuations of the mean field, significant inconsistencies remain.
The cell interior is a highly packed environment in which biological macromolecules evolve and function. This crowded medium affects many biological processes such as protein-protein binding, gene regulation, and protein folding. Thus, biochemical reactions that take place under such crowded conditions differ from those under dilute test-tube conditions, and considerable effort has been invested in understanding these differences.
In this work, we combine different computational tools to disentangle the effects of molecular crowding on biochemical processes. First, we propose a lattice model to study the implications of molecular crowding for enzymatic reactions. We provide a detailed picture of how crowding affects binding and unbinding events and how the separate effects of crowding on the binding equilibrium act together. Then, we implement a lattice model to study the effects of molecular crowding on facilitated diffusion. We find that obstacles on the DNA impair facilitated diffusion. However, the extent of this effect depends on how dynamic the obstacles on the DNA are. For the scenario in which crowders are present only in the bulk solution, we find that under some conditions the presence of crowding agents can enhance specific DNA binding. Finally, we make use of structure-based techniques to look at the impact of the presence of crowders on the folding of a protein. We find that polymeric crowders have stronger effects on protein stability than spherical crowders, and the strength of this effect increases as the polymeric crowders become longer. The methods we propose here are general and can also be applied to more complicated systems.
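The basic excluded-volume logic of such lattice models can be sketched very simply: a crowder blocking a lattice site makes a binding attempt at that site fail. The sketch below is a crude illustration of that single ingredient, not a reimplementation of the thesis models; the occupancy fraction and attempt counts are arbitrary:

```python
import random

def attempt_success_rate(occupied_fraction, n_attempts, rng):
    """Fraction of successful placement attempts on a lattice in which a
    given fraction of sites is blocked by crowders. Each attempt picks a
    site at random and succeeds only if the site is free."""
    hits = 0
    for _ in range(n_attempts):
        if rng.random() >= occupied_fraction:   # sampled site is free
            hits += 1
    return hits / n_attempts

rng = random.Random(42)
dilute = attempt_success_rate(0.0, 10000, rng)    # every attempt lands
crowded = attempt_success_rate(0.4, 10000, rng)   # roughly 60% of attempts land
```

In the full lattice models, this site-blocking competes with other effects (slowed unbinding, altered sliding along DNA), which is why crowding can either impair or enhance specific binding depending on conditions.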
During this work I built a four-wave mixing setup for time-resolved femtosecond spectroscopy of Raman-active lattice modes. This setup enables the study of the selective excitation of phonon polaritons. These quasi-particles arise from the coupling of electromagnetic waves and transverse optical lattice modes, the so-called phonons. The phonon polaritons were investigated in the optically nonlinear, ferroelectric crystals LiNbO₃ and LiTaO₃.
The direct observation of the frequency shift of the scattered narrow-bandwidth probe pulses proves the role of the Raman interaction in the probing and excitation of phonon polaritons. I compare this experimental method with measurements in which ultra-short laser pulses are used; there, the frequency shift remains obscured by the relatively broad bandwidth of the laser pulses. In an experiment with narrow-bandwidth probe pulses, the Stokes and anti-Stokes intensities are spectrally separated. They are assigned to the corresponding counter-propagating wavepackets of phonon polaritons, so that the dynamics of these wavepackets could be studied separately. Based on these findings, I develop the mathematical description of the so-called homodyne detection of light for the case of light scattering from counter-propagating phonon polaritons.
Further, I modified the broad bandwidth of the ultra-short pump pulses using bandpass filters to generate two pump pulses with non-overlapping spectra. This enables the frequency-selective excitation of polariton modes in the sample, which allows me to observe even very weak polariton modes in LiNbO₃ or LiTaO₃ that belong to the higher branches of the dispersion relation of phonon polaritons. The experimentally determined dispersion relation of the phonon polaritons could therefore be extended and compared to theoretical models. In addition, I determined the frequency-dependent damping of phonon polaritons.
In this dissertation we study problems related to synchronization phenomena in the presence of noise, which unavoidably appears in real systems. One part of the work is aimed at investigating how delayed feedback can be utilized to control properties of diverse chaotic dynamical and stochastic systems, with emphasis on those determining the predisposition to synchronization. The other part deals with a constructive role of noise, i.e. its ability to synchronize identical self-sustained oscillators. First, we demonstrate that the coherence of a noisy or chaotic self-sustained oscillator can be efficiently controlled by delayed feedback. We develop the analytical theory of this effect, considering noisy systems in the Gaussian approximation. Possible applications of the effect for synchronization control are also discussed. Second, we consider the synchrony of limit-cycle systems (in other words, self-sustained oscillators) driven by identical noise. For weak noise and smooth systems we prove the purely synchronizing effect of noise. For slightly different oscillators and/or slightly non-identical driving, synchrony becomes imperfect, and this subject is also studied. Then, with numerics, we show that moderate noise can lead to desynchronization of some systems under certain circumstances. For neurons the latter effect means “antireliability” (the “reliability” property of neurons is considered important from the viewpoint of information transmission), and we extend our investigation to neural oscillators, which are not always limit-cycle ones. Third, we develop a weakly nonlinear theory of the Kuramoto transition (a transition to collective synchrony) in an ensemble of globally coupled oscillators in the presence of additional time-delayed coupling terms. We show that a linear delayed feedback not only controls the transition point, but effectively changes the nonlinear terms near the transition.
A purely nonlinear delayed coupling does not affect the transition point, but can reduce or enhance the amplitude of collective oscillations.
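The synchronizing effect of common noise described above can be demonstrated with a minimal phase-oscillator sketch: two identical oscillators, dθ = ω dt + σ cos(θ) dW, driven by the same noise realization. The sensitivity function cos(θ) and all parameter values are illustrative assumptions, not the systems analyzed in the thesis:

```python
import math
import random

def simulate_pair(theta1, theta2, omega=1.0, sigma=0.8, dt=0.01,
                  steps=20000, seed=1):
    """Euler-Maruyama integration of two *identical* phase oscillators,
    d theta = omega*dt + sigma*cos(theta)*dW, driven by the *same* noise.
    Returns the final circular phase difference; for weak to moderate noise
    the common forcing contracts the difference toward zero."""
    rng = random.Random(seed)
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # one increment, shared by both
        theta1 += omega * dt + sigma * math.cos(theta1) * dW
        theta2 += omega * dt + sigma * math.cos(theta2) * dW
    return abs((theta1 - theta2 + math.pi) % (2 * math.pi) - math.pi)

initial_gap = 2.0
final_gap = simulate_pair(0.0, initial_gap)
```

The contraction corresponds to a negative Lyapunov exponent of the common-noise-driven phase dynamics, which is the quantity the analytical theory evaluates.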
Atmospheric circulation and the surface mass balance in a regional climate model of Antarctica
(2007)
Understanding the Earth's climate system and particularly climate variability presents one of the most difficult and urgent challenges in science. The Antarctic plays a crucial role in the global climate system, since it is the principal region of radiative energy deficit and atmospheric cooling. An assessment of the regional climate model HIRHAM is presented. The simulations are generated with the HIRHAM model, which was modified for Antarctic applications. With a horizontal resolution of 55 km, the model has been run for the period 1958-1998, creating long-term simulations from initial and boundary conditions provided by the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA40 re-analysis. The model output is compared with observations from surface stations, upper-air data, global atmospheric analyses and satellite data. The evaluation shows that the simulations with the HIRHAM model capture both the large-scale and regional-scale circulation features, with generally small biases in the modeled variables. On the annual time scale, the largest errors in the model simulations are the overestimation of total cloud cover and the colder near-surface temperature over the interior of the Antarctic plateau. The low-level temperature inversion as well as the low-level wind jet are well captured by the model. The decadal-scale processes were studied based on trend calculations. The long-term run was divided into two 20-year parts. The 2m temperature, 500 hPa temperature, MSLP, precipitation and net mass balance trends were calculated for both periods and over 1958-1998. During the last two decades, strong surface cooling was observed over East Antarctica; this result is in good agreement with that of Chapman and Walsh (2005), who calculated the temperature trend based on observational data. The MSLP trend reveals a large disparity between the first and second parts of the 40-year run.
The overall trend shows the strengthening of the circumpolar vortex and the continental anticyclone. The net mass balance as well as the precipitation show a positive trend over the Antarctic Peninsula region, along Wilkes Land and in Dronning Maud Land. The Antarctic ice sheet grows over the eastern part of Antarctica, with small exceptions in Dronning Maud Land and Wilkes Land, and lowers over the Antarctic Peninsula; this result is in good agreement with the satellite-measured altitude changes presented in Davis (2005). To better understand the horizontal structure of the MSLP, temperature and net mass balance trends, the influence of the Southern Annular Mode (SAM) on the Antarctic climate was investigated. The main meteorological parameters during the positive and negative Antarctic Oscillation (AAO) phases were compared to each other. A positive/negative AAO index means strengthening/weakening of the circumpolar vortex, poleward/northward storm tracks and prevailing/weakening westerly winds. For a detailed investigation of global teleconnections, two positive and one negative periods of the AAO phase were chosen. The differences in MSLP and 2m temperature between positive and negative AAO years during the winter months partly explain the surface cooling during the last decades.
In the last century, several astronomical measurements have supported that a significant percentage (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious "dark" matter (DM). DM does not interact via the electromagnetic force; in other words, it does not reflect, absorb or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most promising targets in which to look for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be treated as extended sources, and that resolution is adequate to detect extended emission from them. For this reason, we performed an extended-source analysis, taking into account in the unbinned maximum likelihood estimation both the energy and the angular-extension dependence of the observed events. The goal was to set more constraining upper limits on the velocity-averaged annihilation cross-section of WIMPs with VERITAS data. VERITAS is an array of four IACTs, able to detect γ-ray photons ranging between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, in this thesis, the results of a multi-instrument project are presented as well.
Its goal was to combine already published data on 20 dSphs from five different experiments (Fermi-LAT, MAGIC, H.E.S.S., VERITAS and HAWC) in order to set upper limits on the WIMP annihilation cross-section in the widest mass range ever reported.
In this thesis the gravitational lensing effect is used to explore a number of cosmological topics. We determine the time delay in the gravitationally lensed quasar system HE1104-1805 using different techniques. We obtain a time delay Delta_t(A-B) = -310 +- 20 days (2 sigma errors) between the two components. We also study the double quasar Q0957+561 during a three-year monitoring campaign. The fluctuations we find in the difference light curves are completely consistent with noise, and no microlensing is needed to explain these fluctuations. Microlensing is also studied in the quadruple quasar Q2237+0305 during the GLITP collaboration (Oct. 1999-Feb. 2000). We use the absence of a strong microlensing signal to obtain an upper limit of v = 600 km/s for the effective transverse velocity of the lens galaxy (considering microlenses with 0.1 solar masses). The distribution of dark matter in galaxy clusters is also studied in the second part of the thesis. In the cluster of galaxies Cl0024+1654 we obtain a mass-to-light ratio of M/L = 200 M_sun/L_sun (within a radius of 3 arcminutes). In the galaxy cluster RBS380 we find a relatively low X-ray luminosity for a massive cluster of L = 2*10^(44) erg/s, but a rich distribution of galaxies in the optical band.
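The basic logic of a lensing time-delay measurement — shift one image's light curve until it best matches the other — can be sketched with a simple correlation grid search. This is a simplified stand-in for the dispersion-spectra techniques actually used on HE1104-1805; the light curves below are synthetic, not the monitoring data:

```python
import numpy as np

def estimate_delay(t, a, b, trial_delays):
    """Grid search for the delay tau that best aligns b(t + tau) with a(t),
    scored by Pearson correlation on the region where b can be interpolated."""
    scores = []
    for tau in trial_delays:
        b_shift = np.interp(t + tau, t, b)            # b evaluated at t + tau
        ok = (t + tau >= t[0]) & (t + tau <= t[-1])   # valid interpolation range
        scores.append(np.corrcoef(a[ok], b_shift[ok])[0, 1])
    return trial_delays[int(np.argmax(scores))]

# Synthetic example: image B trails image A by 310 days.
t = np.linspace(0.0, 3000.0, 3001)
signal = np.sin(2 * np.pi * t / 700.0) + 0.3 * np.sin(2 * np.pi * t / 181.0)
a = signal
b = np.interp(t - 310.0, t, signal)                   # delayed copy of a
tau_hat = estimate_delay(t, a, b, np.arange(200.0, 400.0, 5.0))
```

Real analyses must additionally handle irregular sampling, photometric errors, and microlensing variability in the difference light curve, which is why several techniques are compared in the thesis.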
Box simulations of rotating magnetoconvection in the liquid core of the Earth
Numerical simulations of the 3D MHD equations were carried out with the code NIRVANA. The equations of compressible rotating magnetoconvection were solved numerically in a Cartesian box under Earth-like conditions. Characteristic properties of mean quantities, such as the turbulence intensity or the turbulent heat flux, which arise from the combined action of small-scale fluctuations, were determined. The correlation length of the turbulence depends significantly on the strength and orientation of the magnetic field, and the anisotropic behavior of the turbulence due to the Coriolis and Lorentz forces is much more pronounced for faster rotation. Even a weak magnetic field prevents the development of isotropic behavior on small scales under the influence of rotation alone. This results in a turbulent flow that is dominated by the vertical component. In the presence of a horizontal magnetic field, the vertical turbulent heat flux increases slightly with increasing field strength, so that the cooling of a rotating system is improved. The horizontal heat transport is always directed westward and toward the poles. The latter may constitute a source of large-scale meridional flow, while the former is relevant for global simulations with non-axisymmetric boundary conditions for the heat flux. The mean electromotive force, which describes the generation of magnetic flux by the turbulence, was computed directly from the solutions for velocity and magnetic field. From this, the corresponding α coefficients could be derived. Owing to the very weak density stratification, the α effect changes sign almost exactly in the middle of the box.
The α effect is positive in the upper half and negative in the lower half of a box rotating on the northern hemisphere. For a strong magnetic field, a pronounced downward advection of magnetic flux is also found. A mean-field model of the geodynamo was constructed, based on the α effect as computed from the box simulations. For a very restricted class of radial α profiles, the linear α² model exhibits oscillations on a timescale set by the turbulent diffusion time. The essential properties of the periodic solutions are presented, and the influence of the size of the inner core on the characteristics of the critical region within which oscillating solutions occur was examined. Reversals are interpreted as half an oscillation. They are a rather rare event, since they can take place only when the α profile remains sufficiently long within the region that permits periodic solutions. Owing to strong fluctuations on the convective timescale, the probability of such a reversal is relatively small. In a simple nonlinear mean-field model with realistic input parameters based on the box simulations, the plausibility of this reversal model could be demonstrated by long-term simulations.
The sound characteristics of musical instruments are determined by the interplay of the acoustic vibration modes that can be excited on them, which in turn result from the geometric structure of the resonator in combination with the materials used. In this work, the vibrational behavior of bowed string instruments was investigated using minimally invasive piezoelectric polymer-film sensors. The coupling phenomena studied comprise the so-called wolf tone and the vibration absorbers used to attenuate it, as well as the mutual influence of bow and instrument during playing. On dielectric elastomer actuator membranes, in contrast, the influence of the elastic properties of the membrane material on the acoustic and electromechanical vibrational behavior was demonstrated. The dissertation is divided into three parts, whose main results are summarized below.
In Part I, the operating principle of a tunable vibration absorber for damping wolf tones on bowed string instruments was investigated. By tuning the resonance frequency of the absorber to the wolf-tone frequency, part of the string vibration can be absorbed, so that the excessive excitation of the body resonance that causes the wolf tone is avoided. The absorber consists of a "wolf eliminator", a small mass installed on the afterlength of the affected string (between bridge and tailpiece). It was shown here how the resonances of this absorber depend on the mass of the wolf eliminator and on its position on the afterlength. The geometry of the wolf eliminator also turned out to be decisive, especially for a non-rotationally-symmetric wolf eliminator: in this case, based on the expected inharmonic modes of a mass-loaded string, an additional mode arises that depends on the polarization direction of the string vibration.
Part II of the dissertation deals with elastomer membranes that serve as the basis of dielectric elastomer actuators and that, owing to the membrane tension, also exhibit acoustic resonances. The response of elastomer actuators depends, among other things, on the speed of the electrical excitation. The associated viscoelastic properties of the elastomers used here, silicone and acrylate, were captured on the one hand in a frequency-dependent dynamic mechanical analysis of the elastomer and, on the other hand, measured optically on complete actuators themselves. The higher viscosity of the acrylate, which shows larger actuation strains than the silicone at lower frequencies, leads to a reduction of the strains at higher frequencies, so that above roughly 40 hertz larger actuation strains are achieved with silicone. With the actuators investigated, the lattice constant of soft optical diffraction gratings, installed as an additional film on the membrane, could be controlled. By measuring the acoustic resonance frequency of acrylate elastomer membranes as a function of their pre-stretch, in combination with a model of the hyperelastic behavior of the elastomer (Ogden model), the shear modulus could be determined.
Finally, Part III describes the investigation of violins and their bowed excitation using minimally invasive piezoelectric polymer films. Two film sensors each could be installed on the bow and on the bridge of violins, under the two feet of the bridge. With the two sensors on the bridge, frequency response functions of violins were measured, which allowed a determination of the frequency-dependent bridge motion. This method thus also enables a comprehensive characterization of the signature modes with respect to the bridge dynamics. Thanks to the sensors, the results of the complementary methods of impulse excitation and natural playing of the violins could be compared. For the use of the sensors on the bow, in particular for a measurement of the bow force, a calibration of the bow-sensor system was carried out with a materials testing machine. In a measurement during natural playing, the sensors on the bow revealed, on the one hand, the transmission of the string vibration to the bow. In addition, longitudinal bow-hair resonances could be identified that depend on the position of the string on the bow. From the analysis of this phenomenon, the longitudinal wave speed of the bow hair could be determined, an important quantity for the coupling between string and bow. Based on the present work, studies on bowed string instruments are proposed using the system of sensors on bow and bridge, in which the playability of the instruments can be related to the bridge and bow vibrations excited in each case. Not least, this could allow a better assessment of the role of the bow for sound and playability, which has not yet been fully clarified.
This thesis describes two main projects. The first is the optimization of a hierarchical search strategy for unknown pulsars. This project is divided into two parts: the first (and main) part is a semi-coherent hierarchical optimization strategy; the second is a coherent hierarchical optimization strategy that can be used in a project like Einstein@Home. In both strategies we found that a 3-stage search is the optimal strategy to search for unknown pulsars. For the second project, we developed computer software for a coherent multi-IFO (interferometer observatory) search. To validate our software, we worked on simulated data as well as hardware-injected pulsar signals in the fourth LIGO science run (S4). While with the current sensitivity of our detectors we do not expect to detect any true gravitational-wave signals in our data, we can still set upper limits on the strength of the gravitational-wave signals. These upper limits tell us the weakest signal strength we could have detected. We also used our software to set upper limits on the signal strength of known isolated pulsars using LIGO fifth science run (S5) data.
Nucleation and growth of unsubstituted metal phthalocyanine films from solution on planar substrates
(2012)
In den vergangenen Jahren wurden kosteneffiziente nasschemische Beschichtungsverfahren für die Herstellung organischer Dünnfilme für verschiedene opto-elektronische Anwendungen entdeckt und weiterentwickelt. Unter anderem wurden Phthalocyanin-Moleküle in photoaktiven Schichten für die Herstellung von Solarzellen intensiv erforscht. Aufgrund der kleinen bzw. unbekannten Löslichkeit wurden Phthalocyanin-Schichten durch Aufdampfverfahren im Vakuum hergestellt. Des Weiteren wurde die Löslichkeit durch chemische Synthese erhöht, was aber die Eigenschaften von Pc beeinträchtigte. In dieser Arbeit wurde die Löslichkeit, optische Absorption und Stabilität von 8 verschiedenen unsubstituierten Metall-Phthalocyaninen in 28 verschiedenen Lösungsmitteln quantitativ gemessen. Wegen ausreichender Löslichkeit, Stabilität und Anwendbarkeit in organischen Solarzellen wurde Kupferphthalocyanin (CuPc) in Trifluoressigsäure (TFA) für weitere Untersuchungen ausgewählt. Durch die Rotationsbeschichtung von CuPc aus TFA Lösung wurde ein dünner Film aus der verdampfenden Lösung auf dem Substrat platziert. Nach dem Verdampfen des Lösungsmittels, die Nanobändern aus CuPc bedecken das Substrat. Die Nanobänder haben eine Dicke von etwa ~ 1 nm (typische Dimension eines CuPc-Molekül) und variierender Breite und Länge, je nach Menge des Materials. Solche Nanobändern können durch Rotationsbeschichtung oder auch durch andere Nassbeschichtungsverfahren, wie Tauchbeschichtung, erzeugt werden. Ähnliche Fibrillen-Strukturen entstehen durch Nassbeschichtung von anderen Metall-Phthalocyaninen, wie Eisen- und Magnesium-Phthalocyanin, aus TFA-Lösung sowie auf anderen Substraten, wie Glas oder Indium Zinnoxid. Materialeigenschaften von aufgebrachten CuPc aus TFA Lösung und CuPc in der Lösung wurden ausführlich mit Röntgenbeugung, Spektroskopie- und Mikroskopie Methoden untersucht. 
It is shown that the nanoribbons do not form in solution but during evaporation of the solvent, through supersaturation of the solution. Atomic force microscopy was used to study the morphology of the dried film at different concentrations. The mechanism of nanoribbon formation was studied in detail: the formation of the CuPc nanoribbons from a supersaturated solution is discussed within nucleation and growth theory, and the shape of the nanoribbons is discussed in terms of the interactions between the molecules and the substrate. The wet-processed CuPc thin film was employed as the donor layer in organic bilayer solar cells with the C60 molecule as acceptor. The power-conversion efficiency of such a cell was investigated as a function of the thickness of the CuPc layer.
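The nucleation-and-growth picture invoked here can be made concrete with classical nucleation theory. The sketch below uses invented, generic parameters (surface energy, molecular volume), not values fitted to CuPc/TFA; it only illustrates how the nucleation barrier collapses as the evaporating solvent drives up the supersaturation:

```python
import math

# Classical nucleation theory: the barrier for forming a spherical nucleus
# from a supersaturated solution is dG* = 16*pi*gamma^3*v^2 / (3*(kT*ln S)^2),
# with critical radius r* = 2*gamma*v / (kT*ln S).

K_B = 1.380649e-23      # Boltzmann constant, J/K

def nucleation_barrier(gamma, v_mol, S, T=298.0):
    """Critical radius (m) and barrier (J) for supersaturation ratio S > 1."""
    dmu = K_B * T * math.log(S)          # chemical-potential gain per molecule
    r_crit = 2.0 * gamma * v_mol / dmu   # critical nucleus radius
    dG = 16.0 * math.pi * gamma**3 * v_mol**2 / (3.0 * dmu**2)
    return r_crit, dG

# As the solvent evaporates, S rises and both r* and the barrier shrink:
for S in (1.1, 2.0, 10.0):
    r, dG = nucleation_barrier(gamma=0.05, v_mol=1.0e-27, S=S)
    print(f"S={S:5.1f}  r*={r * 1e9:8.2f} nm  dG*/kT={dG / (K_B * 298.0):.3g}")
```

With illustrative numbers like these, nucleation is effectively forbidden at low supersaturation and becomes rapid only once evaporation concentrates the solution, consistent with ribbons forming during drying rather than in solution.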
Proteins of halophilic organisms, which accumulate molar concentrations of KCl in their cytoplasm, have a much higher content of acidic amino acids than proteins of mesophilic organisms. It has been proposed that this excess is necessary to keep proteins hydrated in an environment with low water activity: either via direct interactions between water and the carboxylate groups of acidic amino acids, or via cooperative interactions between acidic amino acids and hydrated cations, which would stabilize the folded protein. In the course of this Ph.D. study, we investigated these possibilities using atomistic molecular dynamics simulations and classical force fields. High-quality parameters describing the interaction between K+ and the carboxylate groups present in acidic amino acids are indispensable for this study. We first evaluated the quality of the default parameters for these ions within the widely used AMBER ff14SB force field for proteins and found that they perform poorly. We propose new parameters, which reproduce solution activity derivatives of potassium acetate solutions up to 2 mol/kg as well as the distances between potassium ions and carboxylate groups observed in X-ray structures of proteins. To understand the role of acidic amino acids in protein hydration, we investigated this aspect for 5 halophilic proteins in comparison with 5 mesophilic ones. Our results do not support the notion that acidic amino acids are necessary to keep folded proteins hydrated. Proteins with a larger fraction of acidic amino acids do have higher hydration levels; however, the hydration level of each protein is identical at low (b_KCl = 0.15 mol/kg) and high (b_KCl = 2 mol/kg) KCl concentration. It has also been proposed that cooperative interactions of acidic amino acids with nearby hydrated cations stabilize the folded protein and slow down the dynamics of its solvation shell; according to this theory, the cations would be preferentially excluded from the unfolded structure.
We investigate this possibility through extensive free-energy calculations. We find that cooperative interactions between neighboring acidic amino acids exist and are mediated by the ions in solution, but they are present in both the folded and the unfolded structures of halophilic proteins. The translational dynamics of the solvation shell is barely distinguishable between halophilic and mesophilic proteins; such a cooperative effect therefore does not result in unusually slow solvent dynamics, as had been suggested.
Self-sustained oscillations are among the most commonly observed phenomena in biological systems. They arise in non-linear systems in a heterogeneous environment and can be described by the theory of dynamical systems. Part of this theory considers reduced models of the oscillator dynamics in terms of amplitudes and a phase variable. Such variables are highly attractive for theoretical and experimental studies. Theoretically, these variables correspond to an integrable linearization of the generally non-linear system; experimentally, well-established approaches exist to extract phases from oscillator signals. Notably, phase models can also be defined for networks of oscillators. One highly active field examines the effects of non-local coupling among oscillators, which is thought to play a key role in networks with strong coupling. The dissertation introduces and expands the knowledge about high-order phase coupling in networks of oscillators. Mathematical calculations consider the Stuart-Landau oscillator. A novel, numerically based phase-estimation scheme for direct observations of an oscillator's dynamics is introduced. A numerical study of high-order phase coupling applies a Fourier fit to the Stuart-Landau and the van der Pol oscillator. The numerical approach is finally tested on observation-based phase estimates of the Morris-Lecar neuron. A popular approach to the construction of phases from signals is based on phase demodulation by means of the Hilbert transform. Generally, observations of oscillations contain a small, generic variation of their amplitude. The work presents a way to quantify how much these variations of the signal amplitude spoil a phase-demodulation procedure. For the ideal case of purely phase-modulated signals, amplitude modulations vanish; however, even in this case the Hilbert transform produces artificial variations of the reconstructed amplitude.
The work proposes a novel procedure, called Iterative Hilbert Transform Embedding, to obtain an optimal demodulation of signals. The text presents numerous examples and application tests for the method, covering multi-component signals, observables of highly stable limit-cycle oscillations, and noisy phase dynamics. The numerical results are supported by a spectral theory of convergence for weak phase modulations.
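The iterative demodulation idea can be sketched as follows. This is a minimal reconstruction in the spirit of Iterative Hilbert Transform Embedding, with simplified resampling and no treatment of the boundary artifacts of the finite-length Hilbert transform; it is not the thesis' implementation:

```python
import numpy as np
from scipy.signal import hilbert

def protophase(x):
    """Unwrapped angle of the analytic signal."""
    return np.unwrap(np.angle(hilbert(x)))

def ihte(s, n_iter=4):
    """Iteratively refine the Hilbert phase of a zero-mean scalar signal.

    Each pass re-embeds the signal as a function of the current phase
    estimate on a uniform grid and demodulates it again; the refined
    phase is the composition of the two maps.
    """
    n = len(s)
    phi = protophase(s)                       # first protophase estimate
    for _ in range(n_iter - 1):
        u = np.linspace(phi[0], phi[-1], n)   # uniform grid in current phase
        s_u = np.interp(u, phi, s)            # signal re-embedded over phase
        theta = protophase(s_u)               # demodulate embedded signal
        phi = np.interp(phi, u, theta)        # compose: refined phase at t
    return phi

# Phase-modulated test signal: s(t) = cos(phi(t)).
t = np.linspace(0.0, 10.0, 10000, endpoint=False)
true_phase = 2 * np.pi * 5 * t + 0.3 * np.sin(2 * np.pi * 0.5 * t)
phi = ihte(np.cos(true_phase))
```

Away from the signal boundaries, the estimated phase tracks the true phase modulation; the re-embedding step is what suppresses the artificial amplitude variation a single Hilbert pass produces.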
The question of the origin and dynamical evolution of long-lived cosmic magnetic fields remains unanswered in many details. While there is no doubt that the magnetic field of the Earth and of other cosmic objects is generated by the so-called dynamo effect, the precise mechanism as well as the necessary preconditions and boundary conditions of the underlying flows are largely unknown. The flow patterns of interest for a dynamo, which arise in the interior of celestial bodies through convection and differential rotation, are convection rolls parallel to the rotation axis. The dynamo models investigated in this thesis are based on a flow with exactly this geometry, the so-called Roberts flow. Using methods of nonlinear dynamics, the behaviour of the system under variation of the system parameters is characterized in detail. The numerical investigations begin with an analysis of the dynamo activity of the Roberts flow as a function of the two free parameters of the model equations, the magnetic Prandtl number and the strength of the energy input. Different solution types are found, ranging from a stationary magnetic field through periodic to chaotic states. The underlying symmetries are described, and the bifurcations that lead to transitions between solution types are characterized. In addition, there are regions at very small Prandtl numbers in which no dynamo exists at all; this behaviour has also been reported in the literature for many other numerically studied models. In the transition region between the dynamo-active and the dynamo-inactive regime, the occurrence of a so-called blowout bifurcation is found. Furthermore, the thesis addresses the question of how helicity, i.e. a helical motion of the flow, influences the dynamo effect.
To this end, similar flow types are compared that differ mainly in their helicity value. It is found that the helicity must not fall below a certain value for a stable Roberts dynamo to be obtained.
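For reference, the kinetic helicity of a Roberts-type flow can be evaluated directly. The sketch below uses one common textbook form of the flow (the normalization used in the thesis may differ): u = (∂ψ/∂y, −∂ψ/∂x, ψ) with ψ = sin x sin y, whose curl is (∂ψ/∂y, −∂ψ/∂x, 2ψ), so the mean helicity ⟨u · ∇×u⟩ over one periodicity cell equals 1 in these units:

```python
import numpy as np

# One common form of the Roberts flow: u = (dpsi/dy, -dpsi/dx, psi) with
# psi(x, y) = sin(x)*sin(y). Its vorticity is curl u =
# (dpsi/dy, -dpsi/dx, -laplace(psi)) = (dpsi/dy, -dpsi/dx, 2*psi),
# so the helicity density is u . (curl u) = psi_y^2 + psi_x^2 + 2*psi^2.

def mean_helicity(n=256):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing="ij")
    psi = np.sin(X) * np.sin(Y)
    psi_x = np.cos(X) * np.sin(Y)
    psi_y = np.sin(X) * np.cos(Y)
    u = np.array([psi_y, -psi_x, psi])          # velocity field
    w = np.array([psi_y, -psi_x, 2.0 * psi])    # vorticity (analytical curl)
    h = np.sum(u * w, axis=0)                   # pointwise helicity density
    return h.mean()

print(mean_helicity())   # → 1.0 for this normalization
```

Scaling the vertical component ψ → c·ψ scales the mean helicity accordingly, which is one simple way to construct the family of flows with different helicity values that the comparison above refers to.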
Under suitable growth conditions, algal cultures often exhibit a higher cell productivity than is observed in higher plants. Chlamydomonas reinhardtii cells are comparatively small: during the vegetative cell cycle, the cell volume is about 50–3500 µm³. Compared with higher plants, however, the biomass concentration in an algal suspension is low; 1 ml of a typical culture contains between 10^6 and 10^7 algal cells. Quantifications of metabolites or macromolecules that are used for modelling cellular processes are usually carried out on the cell ensemble. In reality, however, every algal cell undergoes an individual development, which complicates the identification of characteristic, generally valid system parameters. The aim of this work was to identify and quantify biochemically relevant quantities in vivo and in vitro using optical methods. In the first part of the thesis, a pulse-amplitude-modulation (PAM) fluorometry setup is presented for measuring the variable chlorophyll fluorescence of single cells caused by external influences. The use of a commercial microscope, the implementation of sensitive detection electronics, and a suitable immobilization method made it possible to achieve a signal-to-noise ratio at which fluorescence signals of individual living Chlamydomonas cells could be measured. In particular, the cell volume and the chlorophyll fluorescence parameter Fv/Fm, which is regarded as a measure of the efficiency of the photosynthetic apparatus and of cell fitness, were determined, and a high degree of heterogeneity of these cellular parameters was found across different developmental stages of the synchronized Chlamydomonas cells.
In the second part of the thesis, laser scanning microscopy and subsequent image-data analysis were applied for the quantitative acquisition of growth-dependent cellular parameters. A commercial confocal microscope was extended with nonlinear microscopy capability, which has the advantage of localized excitation and hence a higher spatial resolution and an overall lower sample exposure. In addition to signal generation by fluorescence excitation, second-harmonic generation (SHG) at biophotonic structures, such as cellular starch, is possible. Based on the distribution functions, model-theoretical approaches made it possible to determine cellular parameters that are not directly accessible to measurement. The morphological information in the image data allowed the determination of cell volumes and of the volumes of subcellular structures, such as nuclei, extranuclear DNA, or starch granules. Furthermore, the number of subcellular structures within a cell or a cell cluster could be determined. The analysis of the signal intensities contained in the image data served as the basis of a relative concentration determination of cellular components, such as DNA and starch. With the method of nonlinear microscopy and subsequent image-data analysis presented here, the distribution of the cellular starch content in a Chlamydomonas population could be followed for the first time during growth and after induced starch degradation. The method was subsequently also applied to cryosections of higher plants, such as Arabidopsis thaliana. As a result, it was shown that many cellular parameters, such as the volume, the cellular DNA and starch content, and the number of starch granules, are described by a lognormal distribution with growth-dependent parametrization.
Cellular parameters such as substance concentration and cellular volume show no significant correlations with each other, from which it must be concluded that there is a high degree of heterogeneity of the cellular parameters within the synchronized Chlamydomonas populations. This holds both for synchronized cultures of Chlamydomonas reinhardtii, which are regarded as the most homogeneous form, and for the cellular parameters measured in the intact cell assemblies of higher plants. This result is particularly relevant for model-theoretical considerations that rely on empirical data or cellular parameters measured on the cell ensemble, which therefore do not necessarily represent the cellular status of an individual cell.
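The lognormal description of single-cell parameters is straightforward to reproduce on synthetic data. In the sketch below the "cell volumes" are simulated, not measured; the point is only that the lognormal parameters follow directly from the mean and standard deviation of the log-transformed sample:

```python
import numpy as np

# If a cellular parameter X is lognormal, log(X) is normal, so mu and sigma
# are simply the mean and standard deviation of the log-transformed sample.
# The synthetic "cell volumes" below are invented for illustration.

rng = np.random.default_rng(0)
volumes = rng.lognormal(mean=6.0, sigma=0.5, size=5000)  # synthetic cells

log_v = np.log(volumes)
mu, sigma = log_v.mean(), log_v.std(ddof=1)

# Characteristic points of the fitted distribution (mode < median for
# a right-skewed lognormal population):
median = np.exp(mu)
mode = np.exp(mu - sigma**2)
print(f"mu={mu:.2f}  sigma={sigma:.2f}  median={median:.0f}  mode={mode:.0f}")
```

Tracking mu and sigma over the cell cycle is one simple way to express the "growth-dependent parametrization" of the distributions described above.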
Coupling of the electrical, mechanical and optical response in polymer/liquid-crystal composites
(2010)
Micrometer-sized liquid-crystal (LC) droplets embedded in a polymer matrix may enable optical switching in the composite film through the alignment of the LC director along an external electric field. When a ferroelectric material is used as the host polymer, the electric field generated by the piezoelectric effect can orient the director of the LC under an applied mechanical stress, making these materials interesting candidates for piezo-optical devices. In this work, polymer-dispersed liquid crystals (PDLCs) are prepared from poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) and a nematic LC. The anchoring effect is studied by means of dielectric relaxation spectroscopy. Two dispersion regions are observed in the dielectric spectra of the pure P(VDF-TrFE) film; they are related to the glass transition and to a charge-carrier relaxation, respectively. In PDLC films containing 10 and 60 wt% LC, an additional, bias-field-dependent relaxation peak is found that can be attributed to the motion of LC molecules. Owing to the anchoring of the LC molecules, this relaxation process is slowed down considerably compared with the related process in the pure LC. The electro-optical and piezo-optical behavior of PDLC films containing 10 and 60 wt% LC is investigated. In addition to the refractive-index mismatch between the polymer matrix and the LC molecules, the interaction between the polymer dipoles and the LC molecules at the droplet interface influences the light-scattering behavior of the PDLC films. For the first time, it was shown that the electric field generated by the application of a mechanical stress may lead to changes in the transmittance of a PDLC film. Such a piezo-optical PDLC material may be useful, e.g., in sensing and visualization applications.
Compared to a non-polar matrix polymer, the polar matrix polymer exhibits a strong interaction with the LC molecules at the polymer/LC interface, which affects the electro-optical response of the PDLC films and prevents a larger increase in optical transmission.
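The kind of relaxation analysis mentioned above can be illustrated with a single ideal Debye process. Real P(VDF-TrFE)/LC spectra typically require broader shapes (e.g. Havriliak-Negami) and several overlapping processes; the data and parameters below are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

# Dielectric loss of a single Debye relaxation:
# eps''(w) = d_eps * w*tau / (1 + (w*tau)^2), peaking at w = 1/tau.
# A slowed-down relaxation (e.g. due to anchoring) shows up as a larger tau.

def debye_loss(w, d_eps, tau):
    wt = w * tau
    return d_eps * wt / (1.0 + wt**2)

w = np.logspace(0, 6, 200)                      # angular frequency, rad/s
rng = np.random.default_rng(1)
data = debye_loss(w, 3.0, 1e-3) + rng.normal(0.0, 0.01, w.size)  # synthetic

popt, _ = curve_fit(debye_loss, w, data, p0=(1.0, 5e-4))
print(popt)   # should recover roughly (3.0, 1e-3)
```

Fitting such a model to spectra of the pure LC and of the PDLC, and comparing the extracted relaxation times, is the standard way to quantify how strongly the interfacial anchoring slows the LC relaxation.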
This thesis describes the development and application of the impacts module of the ICLIPS model, a global integrated assessment model of climate change. The presentation of the technical aspects of this model component is preceded by a discussion of the sociopolitical context for model-based integrated assessments, which defines important requirements for the specification of the model. Integrated assessment of climate change comprises a broad range of scientific efforts to support decision-making about objectives and measures for climate policy, whereby many different approaches have been followed to provide policy-relevant information about climate impacts. Major challenges in this context are the large diversity of the relevant spatial and temporal scales, the multifactorial causation of many climate impacts, considerable scientific uncertainties, and the ambiguity associated with unavoidable normative evaluations. A hierarchical framework is presented for structuring climate impact assessments that reflects the evolution of their practice and of the underlying theory. Integrated assessment models of climate change (IAMs) are scientific tools that contain simplified representations of the relevant components of the coupled society-climate system. The major decision-analytical frameworks for IAMs are evaluated according to their ability to address important aspects of the pertinent social decision problem. The guardrail approach is presented as an 'inverse' framework for climate change decision support, which aims to identify the whole set of policy strategies that are compatible with a set of normatively specified constraints ('guardrails'). This approach combines, to a certain degree, the scientific rigour and objectivity typical of predictive approaches with the ability to consider virtually all decision options that is at the core of optimization approaches. The ICLIPS model is described as the first IAM that implements the guardrail approach.
The representation of climate impacts is a key concern in any IAM. A review of existing IAMs reveals large differences in the coverage of impact sectors, in the choice of the impact numeraire(s), in the consideration of non-climatic developments, including purposeful adaptation, in the handling of uncertainty, and in the inclusion of singular events. IAMs based on an inverse approach impose specific requirements on the representation of climate impacts. This representation needs to combine a level of detail and reliability that is sufficient for the specification of impact guardrails with the conciseness and efficiency that allow for an exploration of the complete domain of plausible climate protection strategies. Large-scale singular events can often be represented by dynamic reduced-form models. This approach, however, is less appropriate for regular impacts, where the determination of policy-relevant results generally needs to consider the heterogeneity of climatic, environmental, and socioeconomic factors at the local or regional scale. Climate impact response functions (CIRFs) are identified as the most suitable reduced-form representation of regular climate impacts in the ICLIPS model. A CIRF depicts the aggregated response of a climate-sensitive system or sector, as simulated by a spatially explicit sectoral impact model, for a representative subset of plausible futures. In the CIRFs presented here, global mean temperature and atmospheric CO2 concentration are used as predictors for global and regional impacts on natural vegetation, agricultural crop production, and water availability. Application of a pattern-scaling technique makes it possible to consider the regional and seasonal patterns in the climate anomalies simulated by several general circulation models while ensuring the efficiency of the dynamic model components.
Efforts to provide quantitative estimates of future climate impacts generally face a trade-off between the relevance of an indicator for stakeholders and the exactness with which it can be determined. A number of non-monetary aggregated impact indicators for the CIRFs are presented, which aim to strike a balance between these two conflicting goals while taking into account additional constraints of the ICLIPS modelling framework. Various types of impact diagrams are used for the visualization of CIRFs, each of which provides a different perspective on the impact result space. The sheer number of CIRFs computed for the ICLIPS model precludes their comprehensive presentation in this thesis. Selected results referring to changes in the distribution of biomes in different biogeographical regions, in the agricultural potential of various countries, and in the water availability in selected major catchments are discussed. The full set of CIRFs is accessible via the ICLIPS Impacts Tool, a graphical user interface that provides convenient access to more than 100,000 impact diagrams developed for the ICLIPS model. The technical aspects of the software are described as well as the accompanying database of CIRFs. The most important application of CIRFs is in 'inverse' mode, where they are used to translate impact guardrails into simultaneous constraints for variables from the optimizing ICLIPS climate-economy model. This translation is facilitated by algorithms for the computation of reachable climate domains and for the parameterized approximation of admissible climate windows derived from CIRFs. The comprehensive set of CIRFs, together with these algorithms, enables the ICLIPS model to flexibly explore sets of climate policy strategies that explicitly comply with impact guardrails specified in biophysical units. This feature is not found in any other intertemporally optimizing IAM.
A guardrail analysis with the integrated ICLIPS model is described that applies selected CIRFs for ecosystem changes. So-called 'necessary carbon emission corridors' are determined for a default choice of normative constraints that limit global vegetation impacts as well as regional mitigation costs, and for systematic variations of these constraints. A brief discussion of recent developments in integrated assessment modelling of climate change connects the work presented here with related efforts.
Proteins are centrally involved in virtually all processes in living cells, and they are also used in many ways in biotechnology. A protein consists of a chain of amino acids. Often, several such chains assemble into larger structures and functional units, so-called protein complexes. Recently it was shown that protein complex formation can already take place during protein biosynthesis (co-translationally) and does not always occur only afterwards (post-translationally). Since misassembly of proteins leads to loss of function and adverse effects, precise and reliable protein complex formation is essential both for cellular processes and for biotechnological applications. Experimental methods can determine, among other things, the stoichiometry and structure of protein complexes, but so far not the dynamics of complex formation on different time scales; fundamental mechanisms of protein complex formation are therefore not yet fully understood. The computational modelling of protein complex formation presented here, which builds on experimental findings, allows a comprehensive analysis of the influence of physico-chemical parameters on the assembly process. The models represent the experimental systems of the cooperation partners (Bar-Ziv, Weizmann Institute, Israel; Bukau and Kramer, Universität Heidelberg) as realistically as possible, in order to study the assembly of protein complexes both in a quasi-two-dimensional synthetic expression system (in vitro) and in the bacterium Escherichia coli (in vivo). The theoretical model is parametrized with the help of a simplified expression system in which the proteins can bind only to the chip surface but not to each other. In this simplified in vitro system, the efficiency of complex formation passes through three regimes: a binding-dominated regime, a mixed regime, and a production-dominated regime. The efficiency reaches its maximum shortly after the transition from the binding-dominated to the mixed regime and then decreases monotonically. In both the full in vitro system and the in vivo system, two competing assembly pathways coexist: in vitro, complex formation proceeds either spontaneously in aqueous solution (solution assembly) or in a defined sequence of steps on the chip surface (surface assembly); in vivo, co- and post-translational complex formation compete. It turns out that the dominance of the assembly pathways in the in vitro system is time-dependent and can be influenced, among other things, by the limitation and strength of the binding sites on the chip surface. In the in vivo system, the spatial distance between the synthesis sites of the two protein components influences complex formation only if the subunits degrade quickly. In that case, co-translational assembly clearly dominates even on short time scales, whereas for stable subunits there is a switch from dominance of post-translational assembly to a slight dominance of co-translational assembly.

Besides the dynamics, the in silico models also allow, among other things, the localization of complex formation and binding to be visualized, which enables a comparison of the theoretical predictions with experimental data and hence a validation of the models. The in silico approach presented here complements the experimental methods and thus helps to interpret their results and to derive new insights from them.
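The competition between co- and post-translational assembly can be caricatured with a small stochastic (Gillespie-type) simulation. All species and rate constants below are invented for illustration and are not the parametrized models described above:

```python
import random

# Toy Gillespie simulation: subunit A is synthesized via a "nascent" state An,
# subunit B is produced directly; B can bind A either while A is still nascent
# (co-translational) or after maturation (post-translational). Degradation
# competes with both pathways. All rates are invented.

def gillespie(t_end=200.0, seed=1):
    rng = random.Random(seed)
    k_prod, k_mat, k_deg = 1.0, 0.5, 0.05
    k_co, k_post = 0.01, 0.01
    An, A, B = 0, 0, 0          # nascent A, mature A, free B
    co, post = 0, 0             # complexes formed per pathway
    t = 0.0
    while t < t_end:
        rates = [
            k_prod,             # -> An      (synthesis of A starts)
            k_prod,             # -> B
            k_mat * An,         # An -> A    (A finishes synthesis)
            k_deg * A,          # A -> 0     (degradation)
            k_deg * B,          # B -> 0
            k_co * An * B,      # An + B -> AB  (co-translational)
            k_post * A * B,     # A + B -> AB   (post-translational)
        ]
        total = sum(rates)
        t += rng.expovariate(total)          # time to next reaction
        r = rng.random() * total             # pick a reaction by weight
        i = 0
        while r > rates[i]:
            r -= rates[i]
            i += 1
        if i == 0:   An += 1
        elif i == 1: B += 1
        elif i == 2: An -= 1; A += 1
        elif i == 3: A -= 1
        elif i == 4: B -= 1
        elif i == 5: An -= 1; B -= 1; co += 1
        else:        A -= 1; B -= 1; post += 1
    return co, post

print(gillespie())
```

Raising the degradation rate k_deg in this toy model shifts the balance toward the co-translational channel, mirroring the qualitative finding above that fast subunit degradation favors co-translational assembly.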