TY - THES
A1 - Donner, Reik Volker
T1 - Advanced methods for analysing and modelling multivariate palaeoclimatic time series
T1 - Modern methods for the analysis and modelling of multivariate palaeoclimatic time series
N2 - The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data such as tree rings or sediment and ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable approaches to statistical modelling as well as methods of time series analysis are necessary that are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the present work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace-element abundances and grain-size distributions obtained near Cape Roberts (East Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transport and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately. In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. Its relationship to the mutual overlap of the components, as well as to the information missing due to grouping and truncation of the measured data, is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals certain problems accompanying the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is evidently related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
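The dimension estimate described in the abstract above lends itself to a compact illustration. The following is a minimal sketch, not the method developed in the thesis: it counts the eigenvalues of a windowed correlation matrix that carry more than a chosen fraction of the total variance; the threshold, the window parameters, and all names are hypothetical choices for the toy example.

    # Minimal sketch (assumed approach, not the thesis implementation):
    # estimate the number of significant, linearly independent components
    # of a multivariate proxy record in sliding windows.
    import numpy as np

    def dimension_estimate(window, noise_level=0.05):
        """Count eigenvalues of the correlation matrix that exceed the
        fraction `noise_level` of the total variance."""
        corr = np.corrcoef(window, rowvar=False)
        eigvals = np.linalg.eigvalsh(corr)  # real, ascending
        return int(np.sum(eigvals / eigvals.sum() > noise_level))

    def sliding_dimensions(data, width=100, step=10):
        """Dimension estimates over sliding windows of a (time x proxies) array."""
        return np.array([dimension_estimate(data[i:i + width])
                         for i in range(0, len(data) - width + 1, step)])

    # Toy usage: 500 samples of 6 proxies driven by 2 shared components.
    rng = np.random.default_rng(0)
    signals = rng.standard_normal((500, 2)) @ rng.standard_normal((2, 6))
    proxies = signals + 0.3 * rng.standard_normal((500, 6))
    print(sliding_dimensions(proxies))  # values near 2 are expected

A temporal change in such an estimate, tracked along a core, is the kind of quantity that can then be compared with known climatic transitions.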
N2 - The separation of natural and anthropogenically caused climate changes is an important task of present-day climate research. For this, detailed knowledge of the natural climate variability during warm stages is indispensable. Besides model simulations and historical records, the analysis of so-called climate proxy data, obtained from archives such as tree rings or sediment and ice cores, plays a special role here. In order to interpret such sources of palaeoclimatic information sensibly, suitable statistical modelling approaches and methods of time series analysis are required that are applicable in particular to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different proxy data from one or more climatological archives contain essential information about climate change on large time scales. On the basis of a suitable decomposition of such multivariate time series, dimensions can be estimated as the number of significant, linearly independent components of the data set. A corresponding approach is presented in this work, critically discussed, and developed further with regard to the analysis of palaeoclimatic time series. Temporal variations of the corresponding measures permit conclusions about climatic changes. Using the example of element abundances and grain-size distributions from the Cape Roberts area in East Antarctica, it is shown that the variability of the dimension of the investigated data sets correlates clearly with the transition from the Oligocene to the Miocene about 24 million years ago as well as with regional deglaciation events. Grain-size distributions in sediments permit conclusions about the dominance of different transport and deposition mechanisms. Measured distribution functions can be approximated appropriately with the aid of finite mixture models. In order to describe comprehensively the statistical uncertainty of the parameter estimation in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship to the overlap of the individual components and to the information lost through truncation and binning of the measured data is discussed using a geological example. The analysis of a sequence of grain-size distributions from Lake Baikal shows that certain problems arise in the application of finite mixture models which prevent a comprehensive climatic interpretation of the results. Instead, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates strongly with the fluctuations of the mean solar insolation on time scales of millennia to tens of millennia. The abundance of coarse-grained material is evidently connected with the annual snow cover, whereas fine-grained material is possibly transported in part from the Taklamakan desert by spring storms.
KW - Time Series Analysis
KW - Palaeoclimatology
KW - Multivariate Statistics
KW - Grain-size distributions
Y1 - 2006
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-12560
ER -
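To make the finite-mixture idea from the Donner abstract concrete, here is a hedged sketch under the assumption that grain sizes are modelled on a logarithmic (phi) scale by a Gaussian mixture; the component count, the synthetic data, and all parameter values are invented for illustration and not taken from the thesis.

    # Assumed illustration: fit a two-component Gaussian mixture to
    # phi-scale grain sizes; the weights act as fractions attributable
    # to different transport and deposition mechanisms.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    phi = np.concatenate([rng.normal(2.0, 0.5, 600),   # coarse population
                          rng.normal(5.0, 0.8, 400)])  # fine population

    gmm = GaussianMixture(n_components=2, random_state=0).fit(phi.reshape(-1, 1))
    for w, m, v in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
        print(f"weight={w:.2f}  mean phi={m:.2f}  sigma={np.sqrt(v):.2f}")

Strong overlap between components, as well as grouping and truncation of the measured data, inflates the uncertainty of such estimates, which is precisely what the asymptotic uncertainty distributions introduced above are meant to quantify.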
TY - THES
A1 - Teppner, Randolf
T1 - Adsorption layers at fluid interfaces : scaling laws and ion distributions
N2 - In this work, two topics were addressed: 1. Ellipsometry on adsorption layers of low-molecular-weight surfactants at the water/air interface (ellipsometry is suited to measuring the adsorbed amounts of non-ionic and zwitterionic surfactants; for ionic surfactants the counterions are additionally detected; ellipsometry measures the changing counterion distribution). 2. Ellipsometric investigation of end-anchored polymer brushes at the water/oil interface (ellipsometry is not able to resolve different segment-concentration profiles within the brush, but it is well suited to checking scaling laws for thicknesses and pressures as functions of the anchor density and chain length of the polymers; for polyisobutene brushes swollen in heptane it could be shown that they behave according to the theoretical predictions for brushes in a theta solvent).
N2 - In this publication, two subjects are dealt with: 1. Ellipsometry on adsorption layers of low-molecular-weight surfactants at the air/water interface (ellipsometry is suitable for measuring adsorbed amounts of non-ionic surfactants, whereas this is impossible for ionic surfactants; in the latter case the ellipsometric signal is strongly influenced by the counter-ion distribution; ellipsometry can measure changes in the counter-ion distribution). 2. Ellipsometric investigation of polymer brushes anchored to the oil/water interface (ellipsometry is not able to distinguish between different segmental concentration profiles within the brush, but it is nevertheless suitable for checking scaling laws for brush height and pressure as functions of the anchor density and degree of polymerization of the polymers; it could be shown that brushes of poly-isobutylene swollen in heptane behave as predicted for brushes in a theta solvent).
KW - Ellipsometry
KW - Adsorption layers
KW - Ion distributions
KW - Polymer brushes
KW - Scaling laws
Y1 - 2000
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-0000117
ER -
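For orientation, the scaling laws tested in the Teppner thesis can be summarized by the standard Alexander-de Gennes-type predictions for end-anchored brushes; the following LaTeX fragment states the textbook result and is added here as background, not quoted from the thesis.

    % Brush height h for chains of N monomers of size a at dimensionless
    % anchor density \sigma a^2 (textbook scaling predictions):
    h \sim a\,N\,(\sigma a^2)^{1/3} \quad \text{(good solvent)}, \qquad
    h \sim a\,N\,(\sigma a^2)^{1/2} \quad \text{(theta solvent)}.

The finding that polyisobutene brushes swollen in heptane follow the theta-solvent prediction corresponds to the second, square-root dependence on the anchor density, with the brush height linear in the degree of polymerization in both cases.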
TY - THES
A1 - Antonelli, Andrea
T1 - Accurate waveform models for gravitational-wave astrophysics: synergetic approaches from analytical relativity
N2 - Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information: they improve our understanding of how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands, and they allow general relativity to be tested in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet, this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key element of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters and which need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models suffice to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals' parameters due to the presence of a foreground of many sources that overlap in the frequency band. This is recognized as one of the biggest challenges for the analysis of future detectors' data, since such biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data. For the first task, we focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN; weak-field, small velocities) approximation, which is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM; weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much larger than that of the other. These are most appropriate for binaries with high asymmetry in the masses, which challenge current waveform models. Moreover, they allow one to "cover" regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can synergistically be included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation is recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resulting models can cover both the low-spin, comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to a study of how best to incorporate information from the PN, PM, SMR, and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are when compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We show that PM information has the potential to improve currently employed models for LIGO and Virgo, especially if recast within the EOB formalism. This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of the PM and SMR approximations can be employed to access previously unknown PN orders, deriving the third-subleading PN dynamics for the spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter-estimation studies, thereby improving them when the binaries have high spins.
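The EOB recasting mentioned above can be summarized by its standard energy map, reproduced here as background (this is the textbook form of the formalism, not a result specific to this thesis):

    % EOB energy map: the real two-body Hamiltonian follows from an
    % effective Hamiltonian H_eff of a test mass \mu moving in a
    % deformed Schwarzschild (or Kerr) background,
    H_{\rm EOB} = M\,\sqrt{1 + 2\nu\left(\frac{H_{\rm eff}}{\mu} - 1\right)},
    \qquad M = m_1 + m_2, \quad \mu = \frac{m_1 m_2}{M}, \quad \nu = \frac{\mu}{M}.
    % PN, PM and SMR information enters through the potentials that
    % deform the effective background metric.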
Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than on the PN one, as is usually done). We show in detail how this is done without incurring the divergences that affected previous attempts, and we compare the resulting model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better against NR than the PN-based ones, suggesting that SMR-informed EOB models might be the key to modelling binaries in the future. In the second part of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources and demonstrate its validity against Monte Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
N2 - When two compact objects such as black holes or neutron stars collide, space and time around them are strongly curved. The effect is disturbances of space-time, so-called gravitational waves, which propagate throughout the entire Universe. With the powerful and precise networks of detectors and the work of many scientists around the globe, gravitational waves can be measured on Earth. Gravitational waves carry information about the system that generated them. In particular, one can learn how the compact objects formed and what they consist of. From this one can deduce how the Universe expands, and general relativity can be tested in regions of strong gravity. Extracting this information requires accurate models. Models can be obtained either numerically, by solving the famous Einstein equations, or analytically, by approximating their solutions. In my thesis we have pursued the second approach in order to obtain very accurate predictions for the signals that can be used in upcoming observations by gravitational-wave detectors.
KW - gravitational waves
KW - general relativity
KW - data analysis
Y1 - 2021
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-576671
ER -
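The bias metrics of the second part of the Antonelli thesis build on the linear-signal approximation; the standard form of that approximation's bias formula is sketched below for reference (the thesis metrics generalize this type of expression to foregrounds of many overlapping signals; the formula itself is the familiar Fisher-formalism result, not the thesis' full machinery).

    % Linear-signal sketch: an unfitted residual \delta h shifts the
    % best-fit parameters \theta^i of a signal h(\theta) by approximately
    \Delta\theta^i \approx \left(\Gamma^{-1}\right)^{ij}
    \left\langle \partial_j h \,\middle|\, \delta h \right\rangle,
    \qquad
    \Gamma_{ij} = \left\langle \partial_i h \,\middle|\, \partial_j h \right\rangle,
    % with \langle\,\cdot\,|\,\cdot\,\rangle the noise-weighted inner
    % product defined by the detector's spectral density.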
TY - THES
A1 - Kellermann, Thorsten
T1 - Accurate numerical relativity simulations of non-vacuum space-times in two dimensions and applications to critical collapse
T1 - Exact numerical-relativity simulations of non-vacuum space-times in two dimensions and their application to problems of critical collapse
N2 - This thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. In a first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conservative form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, such as the entropy in a critical process, should provide a deeper understanding of thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This, of course, brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In previous formulations, the flux operator is expanded and the 1/r terms not containing derivatives are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level. Our tests show that the new formulation yields results with a global truncation error that is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behaviour of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena concerning the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, their velocity, and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considering the head-on collision of Gaussian distributions of matter. Also in this case they found the appearance of type-I critical behaviour, but they also performed a perturbative analysis of the initial distributions of matter and of the merged object.
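As background for the type-I behaviour discussed above and below, the standard scaling relation for type-I critical collapse is reproduced here (a textbook relation, not a result quoted from the thesis):

    % Near the threshold P*, the metastable critical solution survives
    % for a time \tau that scales logarithmically with the distance
    % from criticality,
    \tau(P) \simeq -\frac{1}{\lambda}\,\ln\left|P - P_\star\right| + \text{const},
    % where \lambda is the growth rate of the single unstable mode. The
    % fine structure mentioned below appears as a periodic modulation
    % superposed on this linear-in-log relation.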
Because of the considerable difference found in the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium, and in particular not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup which is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with what was found in previous studies, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations in a way similar to that studied in the context of scalar-field critical collapse.
N2 - This thesis focuses on the physics of neutron stars and their description with methods of numerical relativity. In a first step, a new numerical environment, the Whisky2D code, is developed; it solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the so-called flux-conservative form of the equations. In the second part, the new code is used to investigate the critical behaviour of two colliding neutron stars. In view of the analogy to phase transitions in statistical physics, we consider the evolution of the entropy of the neutron stars during the entire process. A better understanding of the evolution of thermodynamic quantities, such as the entropy in a critical process, should lead to a deeper understanding of relativistic thermodynamics. The Whisky2D code, which solves the equations of relativistic hydrodynamics, was written in a flux-conservative form and in cylindrical coordinates. This gives rise to 1/r singular terms, where r is the radial coordinate, which must be treated accordingly. In earlier works, the operator is expanded and the 1/r terms are moved to the right-hand side, so that the left-hand side takes a form identical to the Cartesian formulation. We call this the standard formulation. Another possibility is not to expand the terms but to absorb the 1/r factor into the equation by redefining the conserved variables. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, no differences between the two formulations are to be expected; only the numerical perspective reveals the differences. Tests show that the new formulation reduces numerical errors by several orders of magnitude. The second part of the thesis uses the new code for the investigation of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space whose two possible final states are either a new stable neutron star or a black hole. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equation for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole, and for every P < P⋆, S[P] is a solution not containing singularities.