Institut für Physik und Astronomie
Monolithic perovskite silicon tandem solar cells can overcome the theoretical efficiency limit of silicon solar cells. This requires an optimum bandgap, high quantum efficiency, and high stability of the perovskite. Herein, a silicon heterojunction bottom cell is combined with a perovskite top cell with an optimum bandgap of 1.68 eV in a planar p-i-n tandem configuration. A methylammonium-free FA0.75Cs0.25Pb(I0.8Br0.2)3 perovskite with high Cs content is investigated for improved stability. A 10% molarity increase to 1.1 M of the perovskite precursor solution results in ≈75 nm thicker absorber layers and a 0.7 mA cm^-2 higher short-circuit current density. With the optimized absorber, tandem devices reach a high fill factor of 80% and up to 25.1% certified efficiency. The unencapsulated tandem device shows an efficiency improvement of 2.3% (absolute) over 5 months, demonstrating the robustness of the absorber against degradation. Moreover, a photoluminescence quantum yield analysis reveals that, with adapted charge transport materials and surface passivation, along with improved antireflection measures, the high-bandgap perovskite absorber has the potential for 30% tandem efficiency in the near future.
We present results of full 3D hydrodynamical and radiative transfer simulations of the colliding stellar winds in the massive binary system η Carinae. We accomplish this by applying the SimpleX algorithm for 3D radiative transfer on an unstructured Voronoi-Delaunay grid to recent 3D smoothed particle hydrodynamics (SPH) simulations of the binary colliding winds. We use SimpleX to obtain detailed ionization fractions of hydrogen and helium, in 3D, at the resolution of the original SPH simulations. We investigate several computational domain sizes and Luminous Blue Variable primary star mass-loss rates. We furthermore present new methods of visualizing and interacting with output from complex 3D numerical simulations, including 3D interactive graphics and 3D printing. While we initially focus on η Car, the methods employed can be applied to numerous other colliding wind (WR 140, WR 137, WR 19) and dusty 'pinwheel' (WR 104, WR 98a) binary systems. Coupled with 3D hydrodynamical simulations, SimpleX simulations have the potential to help determine the regions where various observed time-variable emission and absorption lines form in these unique objects.
We present 3D numerical simulations of the NGC 6888 nebula considering the proper motion and the evolution of the star, from the red supergiant (RSG) to the Wolf-Rayet (WR) phase. Our simulations reproduce the limb-brightened morphology observed in [O III] and X-ray emission maps. The synthetic maps computed from our simulations show filamentary and clumpy structures produced by instabilities triggered in the interaction between the WR wind and the RSG shell.
Popular-science abstract: Observational optical astronomy has traditionally followed two distinct approaches: objects are either imaged with cameras, or their light is dispersed by wavelength to obtain spectra. Integral field spectroscopy is a relatively new technique that unites these two observing methods. The object's image in the telescope focus is spatially segmented, and each spatial element is fed into a common spectrograph. The object is thus not only captured in two spatial dimensions; the spectral component is additionally obtained as a third dimension, which is why the technique is also called the 3D method. Intuitively, the resulting data can be pictured as an image in which each pixel contains not just a single intensity value but an entire spectrum. Unlike conventional slit spectrographs, this technique makes it possible to capture extended objects in their entirety. The particular strength of the method is its ability to measure the background contamination in the object's immediate surroundings and to account for it in the analysis. This capability makes the 3D method appear predestined for extragalactic stellar astronomy, a field opened up by modern large telescopes. The detailed study of resolved stellar populations in nearby galaxies has only recently become possible thanks to progress with modern large telescopes and advanced instrumentation. Because of its importance for the formation and evolution of galaxies, such work will continue to gain significance. In this thesis, integral field spectroscopy was tested on two planetary nebulae in the nearest large spiral galaxy, M31 (NGC 224), whose brightnesses and coordinates were available from a survey.
For this purpose, observations were obtained with the MPFS instrument at the Russian 6 m telescope in Zelenchuk/Caucasus and with INTEGRAL/WYFFOS at the English William Herschel Telescope on La Palma. A surprising result was that one of the two objects had been misclassified. Both the measurable spatial extent of the object and its spectral appearance ruled out its identity as a planetary nebula. It is most likely a supernova remnant, especially since, within the errors, an X-ray source detected by the ROSAT satellite lies at the same position. The integral field instruments used in this project were of two different designs, which allowed them to be compared. A main point of criticism of the instruments used was their low light throughput. The experience gained fed into the concept of the 3D instrument PMAS (Potsdam Multi-Aperture Spectrophotometer), currently under construction in Potsdam, which is initially intended for the 3.5 m telescope of the Calar Alto Observatory in southern Spain. To improve the efficiency of this instrument, this thesis investigated in the laboratory the coupling between the optics used for image segmentation and the optical fibers. The studies on maximizing light throughput and stability show that the efficiency can be increased by about 20 percent by choosing a suitable coupling method.
The main objective of this work is to investigate the evolution of massive stars and the interplay between them and the ionized gas for a sample of local metal-poor Wolf-Rayet galaxies.
Optical integral field spectroscopy was used in combination with multi-wavelength radio data.
Combining the optical and radio data, we locate Wolf-Rayet stars and supernova remnants across the Wolf-Rayet galaxies to study the spatial correlation between them. This study will shed light on massive star formation and its feedback, and will help us to better understand distant star-forming galaxies.
In this paper, two groups supporting different views on the mechanism of light-induced polymer deformation argue for their respective underlying theoretical conceptions, in order to bring this interesting debate to the attention of the scientific community. The group of Prof. Nicolae Hurduc supports the model claiming that the cyclic isomerization of azobenzenes may cause an athermal transition of the glassy azobenzene-containing polymer into a fluid state, the so-called photo-fluidization concept. This concept is quite convenient for an intuitive understanding of the deformation process as an anisotropic flow of the polymer material. The group of Prof. Svetlana Santer supports the re-orientational model, in which the mass transport of polymer material accomplished during deformation is generated by the light-induced re-orientation of the azobenzene side chains and, as a consequence, of the polymer backbone; this in turn produces local mechanical stress sufficient to irreversibly deform an azobenzene-containing material even in the glassy state. For the debate, we chose three polymers differing in glass transition temperature (32 °C, 87 °C and 95 °C), representing extreme cases of flexible and rigid materials. The polymer film deformation occurring during irradiation with different interference patterns is recorded using a homemade set-up combining an optical part for the generation of interference patterns with an atomic force microscope for acquiring the kinetics of film deformation. We also demonstrate the unique ability of azobenzene-containing polymer films to switch their topography in situ and reversibly by changing the irradiation conditions. We discuss the results of reversible deformation of the three polymers induced by irradiation with intensity (IIP) and polarization (PIP) interference patterns, and with light of homogeneous intensity, in terms of two approaches: the re-orientational and the photo-fluidization concepts.
Both approaches agree that the formation of opto-mechanically induced stresses is a necessary prerequisite for the deformation process. On this basis, the deformation can be characterized either as a flow or as mass transport.
We analyse whether a stellar atmosphere model computed with the code CMFGEN provides an optimal description of the stellar observations of WR 136 and simultaneously reproduces the nebular observations of NGC 6888, such as the ionization degree, which is modelled with the pyCloudy code. All the available observational material (far- and near-UV and optical spectra) was used to constrain such models. We find that the stellar temperature T∗, at τ = 20, can lie anywhere between 70 000 and 110 000 K, but when using the nebula as an additional constraint, we find that stellar models with T∗ ∼ 70 000 K represent the best solution for both the star and the nebula.
Dark matter (DM) has not yet been directly observed, but it has a very solid theoretical basis. Several observations provide indirect evidence: galactic rotation curves show that galaxies rotate too fast to hold on to their constituent parts, and galaxy clusters bend the light coming from galaxies lying behind them more than expected from the mass that can be visibly seen. These observations, among many others, can be explained by theories that include DM. The missing piece is to detect something that can be explained exclusively by DM. Direct observation in a particle accelerator is one way, and indirect detection using telescopes is another. This thesis focuses on the latter method.
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is a telescope array that detects Cherenkov radiation. Theory predicts that DM particles annihilate into, e.g., a γγ pair and create a distinctive energy spectrum when detected by such telescopes, i.e., a monoenergetic line at the energy corresponding to the particle mass. This so-called 'smoking-gun' signature is sought with a sliding-window line search within the sub-range ∼0.3-10 TeV of the VERITAS energy range, ∼0.01-30 TeV.
Standard analysis within the VERITAS collaboration uses Hillas analysis and look-up tables, acquired by analysing particle simulations, to calculate the energy of the particle causing the Cherenkov shower. In this thesis, an improved analysis method has been used: modelling each shower as a 3D Gaussian should improve the quality of the energy reconstruction. Five dwarf spheroidal galaxies were chosen as targets, with a total of ∼224 hours of data. The targets were analysed both individually and stacked. Particle simulations were based on two simulation packages, CARE and GrISU.
Improvements of up to a few percent each have been made to the energy resolution and bias correction in comparison to the standard analysis. Nevertheless, no line with a relevant significance has been detected. The most promising line is at an energy of ∼422 GeV, with an upper-limit cross section of 8.10 · 10^−24 cm^3 s^−1 and a significance of ∼2.73 σ before trials correction and ∼1.56 σ after. Upper-limit cross sections have also been calculated for the γγ annihilation process and four other outcomes. The limits are in line with current limits from other methods, ranging from ∼8.56 · 10^−26 to 6.61 · 10^−23 cm^3 s^−1. Future larger telescope arrays, like the upcoming Cherenkov Telescope Array (CTA), will provide better results with the help of this analysis method.
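The sliding-window line search described above can be sketched as follows. This is an illustrative toy on a synthetic spectrum (the power-law background, bin counts, and injected line are all assumed values), not the thesis analysis chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binned spectrum over 0.3-3 TeV: a smooth power-law background with
# a monoenergetic line injected near 422 GeV (all numbers illustrative).
energies = np.logspace(np.log10(300), np.log10(3000), 40)  # GeV
background = 1e4 * (energies / 300.0) ** -2.5
counts = rng.poisson(background)
counts[np.argmin(np.abs(energies - 422.0))] += 400  # injected line signal

def sliding_window_line_search(energies, counts, half_width=3):
    """Slide a window over the spectrum: estimate the background under each
    bin from a power-law fit to the side bands, and return the bin with the
    largest naive excess significance."""
    log_e = np.log(energies)
    best_energy, best_sig = None, -np.inf
    for i in range(half_width, len(energies) - half_width):
        side = np.r_[np.arange(i - half_width, i),
                     np.arange(i + 1, i + 1 + half_width)]
        # a power law is a straight line in log-log space
        coeffs = np.polyfit(log_e[side], np.log(counts[side]), 1)
        b = np.exp(np.polyval(coeffs, log_e[i]))
        sig = (counts[i] - b) / np.sqrt(b)  # crude significance estimate
        if sig > best_sig:
            best_energy, best_sig = energies[i], sig
    return best_energy, best_sig

e_line, sig = sliding_window_line_search(energies, counts)
print(f"largest excess at ~{e_line:.0f} GeV ({sig:.1f} sigma)")
```

A real analysis would replace the crude significance with a likelihood-ratio test and apply a trials correction over all window positions, as the thesis does.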
In the context of cosmological structure formation, sheets, filaments, and eventually halos form due to gravitational instabilities. It is noteworthy that, at all times, the majority of the baryons in the universe reside not in the dense halos but in the filaments and sheets of the intergalactic medium. While at higher redshifts of z > 2 these baryons can be detected via the absorption of light (originating from more distant sources) by neutral hydrogen at temperatures of T ~ 10^4 K (the Lyman-alpha forest), at lower redshifts only about 20% can be found in this state. The remainder (about 50 to 70% of the total baryon mass) is unaccounted for by observational means. Numerical simulations predict that these missing baryons could reside in the filaments and sheets of the cosmic web at high temperatures of T = 10^4.5 - 10^7 K, but only at low to intermediate densities, and constitute the warm-hot intergalactic medium (WHIM). The high temperatures of the WHIM are caused by the formation of shocks and the subsequent shock-heating of the gas. This results in a high degree of ionization and renders the reliable detection of the WHIM a challenging task. Recent high-resolution hydrodynamical simulations indicate that, at redshifts of z ~ 2, filaments are able to provide very massive galaxies with a significant amount of cool gas at temperatures of T ~ 10^4 K. This could have an important impact on star formation in those galaxies. It is therefore of principal importance to investigate the particular hydro- and thermodynamical conditions of these large filament structures. Density and temperature profiles, as well as velocity fields, are expected to leave their special imprint on spectroscopic observations. A potential multiphase structure may act as a tracer in observational studies of the WHIM. In the context of cold streams, it is important to explore the processes which regulate the amount of gas transported by the streams.
This includes the time evolution of filaments, as well as possible quenching mechanisms. In this context, the halo mass range in which cold stream accretion occurs is of particular interest. In order to address these questions, we perform dedicated hydrodynamical simulations of very high resolution and investigate the formation and evolution of prototype structures representing the typical filaments and sheets of the WHIM. We start with a comprehensive study of the one-dimensional collapse of a sinusoidal density perturbation (pancake formation) and examine the influence of radiative cooling, heating by a UV background, thermal conduction, and the effect of small-scale perturbations given by the cosmological power spectrum. We use a set of simulations parametrized by the wavelength of the initial perturbation L. For L ~ 2 Mpc/h the collapse leads to shock-confined structures. As a result of radiative cooling and of heating by the UV background, a relatively cold and dense core forms. With increasing L the core becomes denser and more concentrated. Thermal conduction enhances this trend and may lead to an evaporation of the core at very large L ~ 30 Mpc/h. When extending our simulations into three dimensions, instead of a pancake structure we obtain a configuration consisting of well-defined sheets, filaments, and a gaseous halo. For L > 4 Mpc/h, filaments form which are fully confined by an accretion shock. As with the one-dimensional pancakes, they exhibit an isothermal core. Thus, our results confirm a multiphase structure, which may generate particular spectral tracers. We find that, after its formation, the core becomes shielded against further infall of gas onto the filament, and its mass content decreases with time. In the vicinity of the halo, the filament's core can be attributed to the cold streams found in other studies. We show that the basic structure of these cold streams exists from the very beginning of the collapse process.
Furthermore, the cross section of the streams is constricted by the outward-moving accretion shock of the halo. Thermal conduction leads to a complete evaporation of the cold stream for L > 6 Mpc/h. This corresponds to halos with a total mass higher than M_halo = 10^13 M_sun, and predicts that in more massive halos star formation cannot be sustained by cold streams. Far away from the gaseous halo, the temperature gradients in the filament are not sufficiently strong for thermal conduction to be effective.
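The one-dimensional pancake setup can be illustrated with the Zel'dovich approximation; the displacement amplitude and wavelength below are assumed values for illustration, not parameters taken from the simulations:

```python
import numpy as np

# Zel'dovich approximation for a 1D sinusoidal perturbation (illustrative
# sketch only, not the simulation code): particles at Lagrangian position q
# move to x(q, a) = q + D(a) * A * sin(k q), so mass conservation gives
#   rho / rho_mean = 1 / (1 - D * A * k * cos(k q)).

L = 2.0                 # perturbation wavelength in Mpc/h (cf. L ~ 2 Mpc/h)
A = 0.05                # initial displacement amplitude (assumed)
k = 2.0 * np.pi / L

def overdensity(q, D):
    """Dimensionless density rho/rho_mean for growth factor D."""
    return 1.0 / (1.0 - D * A * k * np.cos(k * q))

# A caustic (the pancake, where a shock forms) appears where the
# denominator first reaches zero, i.e. at growth factor D = 1 / (A k).
D_collapse = 1.0 / (A * k)
print(f"first caustic at growth factor D = {D_collapse:.2f}")

q = np.linspace(0.0, L, 7)
# at half the collapse growth factor, density is enhanced toward q = 0 and q = L
print(np.round(overdensity(q, 0.5 * D_collapse), 2))
```

In the full simulations this kinematic picture is superseded once the shock forms; the sketch only shows how the sinusoidal mode steepens into a pancake.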
This work reviews the literature on an alleged global warming 'pause' in global mean surface temperature (GMST) to determine how it has been defined, what time intervals are used to characterise it, what data are used to measure it, and what methods are used to assess it. We test for 'pauses', both in the normally understood sense of the term, meaning no warming trend, and for a 'pause' defined as a substantially slower trend in GMST. The tests are carried out with the historical versions of GMST that existed for each pause interval tested, and with the current versions of each of the GMST datasets. The tests are conducted following the common (but questionable) practice of breaking the linear fit at the start of the trend interval ('broken' trends), and also with trends that are continuous with the data bordering the trend interval. We also compare results when appropriate allowance is made for the selection-bias problem. The results show that there is little or no statistical evidence for a lack of trend or a slower trend in GMST using either the historical or the current data. The perception that there was a 'pause' in GMST was bolstered by earlier biases in the data in combination with incomplete statistical testing.
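The difference between 'broken' and continuous trend fits can be sketched on synthetic data; the series below is an artificial, steadily warming record with noise (a stand-in for GMST, not a real dataset), and the break year is chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Artificial annual series: steady warming of 0.018 K/yr plus noise
# (illustrative stand-in for GMST, not real data).
years = np.arange(1970, 2015)
temps = 0.018 * (years - 1970) + rng.normal(0.0, 0.08, years.size)

start = 1998  # candidate 'pause' start year (assumed)
after = years >= start

# 'Broken' trend: fit the sub-interval alone, ignoring the earlier data;
# the fitted line may jump discontinuously at the break point.
broken_slope = np.polyfit(years[after], temps[after], 1)[0]

# Continuous trend: piecewise-linear fit over the whole series with a hinge
# at the break, forcing the post-break line to join the earlier data.
t = (years - years[0]).astype(float)
hinge = np.maximum(0.0, (years - start).astype(float))
X = np.column_stack([np.ones_like(t), t, hinge])
beta, *_ = np.linalg.lstsq(X, temps, rcond=None)
continuous_slope = beta[1] + beta[2]  # slope after the break

print(f"broken trend:     {10 * broken_slope:+.3f} K/decade")
print(f"continuous trend: {10 * continuous_slope:+.3f} K/decade")
```

The broken fit discards the constraint from the pre-break data, so on short sub-intervals its slope estimate is far noisier; this is one reason the paper treats broken trends as a questionable practice.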