530 Physics
Year of publication
- 2006 (51)
Document Type
- Article (30)
- Doctoral Thesis (16)
- Other (2)
- Preprint (2)
- Postprint (1)
Keywords
- Nichtlineare Dynamik (3)
- nonlinear dynamics (3)
- Klimatologie (2)
- Monsun (2)
- Polyelektrolyt (2)
- Unsicherheitsanalyse (2)
- Zeitreihenanalyse (2)
- bifurcation analysis (2)
- uncertainty analysis (2)
- 1,3,4-oxadiazole (1)
We develop a model of stochastic radiation pressure for rotating non-spherical particles and apply the model to the circumplanetary dynamics of dust grains. The stochastic properties of the radiation pressure are related to the ensemble-averaged characteristics of the rotating particles, which are given in terms of the rotational time-correlation function of a grain. We investigate the model analytically and show that an ensemble of particle trajectories demonstrates diffusion-like behaviour. The analytical results are compared with numerical simulations performed for the motion of dusty ejecta from Deimos in orbit around Mars. We find that the theoretical predictions are in good agreement with the simulation results. The agreement, however, deteriorates at later times, when the impact of non-linear terms, neglected in the analytic approach, becomes significant. Our results indicate that the stochastic modulation of the radiation pressure can play an important role in the circumplanetary dynamics of dust and may, in the case of some dusty systems, noticeably alter the optical depth. (c) 2006 Elsevier Ltd. All rights reserved.
The velocity distribution function of granular gases in the homogeneous cooling state, as well as of some heated granular gases, decays for large velocities as f ∝ exp(−const · v). That is, its high-energy tail is overpopulated compared with the Maxwell distribution. At present, there is no theory to describe the influence of the tail on the kinetic characteristics of granular gases. We develop an approach to quantify the overpopulated tail and analyze its impact on granular gas properties, in particular on the cooling coefficient. We observe and explain anomalously slow relaxation of the velocity distribution function to its steady state.
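As a minimal numerical illustration of this overpopulation (with a hypothetical constant c = 2 and unit prefactors, not values from the paper), one can compare an exponential tail with a Maxwell distribution:

```python
import numpy as np

# Hypothetical comparison: Maxwell distribution ~exp(-v^2) versus an
# exponential high-energy tail ~exp(-c*v); c and all prefactors are
# illustrative choices, not results from the paper.
v = np.linspace(0.0, 10.0, 1001)
c = 2.0
maxwell = np.exp(-v**2)
tail = np.exp(-c * v)

# At small v the Maxwell form dominates, but for large v the ratio
# tail/maxwell grows without bound: the tail is overpopulated.
ratio = tail / maxwell
```

Any distribution with an exponential tail eventually exceeds a Gaussian by an arbitrarily large factor, which is why the tail can matter for kinetic coefficients even though it carries little probability mass.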
Context. Very massive stars pass through the Wolf-Rayet (WR) stage before they finally explode. Details of their evolution have not yet been safely established, and their physics is not well understood. Their spectral analysis requires adequate model atmospheres, which have been developed step by step during the past decades and, in their recent versions, account for line blanketing by the millions of lines from iron and iron-group elements. However, so far only very few WN stars have been re-analyzed by means of line-blanketed models. Aims. The quantitative spectral analysis of a large sample of Galactic WN stars with the most advanced generation of model atmospheres should provide an empirical basis for various studies of the origin, evolution, and physics of the Wolf-Rayet stars and their powerful winds. Methods. We analyze a large sample of Galactic WN stars by means of the Potsdam Wolf-Rayet (PoWR) model atmospheres, which account for iron line blanketing and clumping. The results are compared with a synthetic population generated from the Geneva tracks for massive star evolution. Results. We obtain a homogeneous set of stellar and atmospheric parameters for the Galactic WN stars, partly revising earlier results. Conclusions. Comparing the results of our spectral analyses of the Galactic WN stars with the predictions of the Geneva evolutionary calculations, we conclude that there is rough qualitative agreement. However, the quantitative discrepancies are still severe, and there is no preference for the tracks that account for the effects of rotation. It seems that the evolution of massive stars is still not satisfactorily understood.
This paper discusses translocation features of the 20S proteasome in order to explain typical proteasome length distributions. We assume that the protein transport depends significantly on the fragment length with some optimal length which is transported most efficiently. By means of a simple one-channel model, we show that this hypothesis can explain both the one- and the three-peak length distributions found in experiments. A possible mechanism of such translocation is provided by so-called fluctuation-driven transport.
Three force-based methods for determining the surface tension of liquids, namely the vertical plate method of Wilhelmy, the frame method of Lenard, and the ring method of du Noüy, are compared and examined with respect to a common principle of correction. It is shown that these three most important force-based methods allow the surface tension to be determined under static conditions. The force component of the corresponding liquid column below the measuring wire, obtained for the straight part of the withdrawal curve up to the transition into its curved part, provides exact surface tension values. This experimentally accessible force component describes the physical background of the measured-value correction, in contrast to approximate equations obtained by purely mathematical means. Usually, the determination of the surface tension of liquids is based on exact, thermodynamically derived equations only for the vertical plate method, whereas for the frame and ring methods correction factors in approximate equations are used. In the usual application of the force-based methods under the non-static condition of withdrawal of a liquid column, the force maximum measured on withdrawal of the measuring object (plate, frame, or ring) is the basis for determining the surface tension. In these cases, the measured surface tension values are compensated by correction equations for the frame and ring methods, which are based on a correction factor and empirically obtained correction tables. The surface tension values obtained in this usual way agree with those obtained by using the force component of the corresponding liquid column below the measuring wire for the straight part of the withdrawal curve up to the transition into its curved part. Problems arising in force measurements with increasing thickness of the measuring wires inside and outside the rings are discussed.
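The static principle behind the plate method reduces to a one-line force balance, gamma = F / (P cos θ) with P the wetted perimeter; the helper function and the example numbers below are illustrative assumptions, not data from the paper:

```python
import math

def wilhelmy_surface_tension(force_N, plate_width_m, plate_thickness_m,
                             contact_angle_deg=0.0):
    """Static Wilhelmy plate balance: gamma = F / (P * cos(theta)),
    with P the wetted perimeter of the plate (hypothetical helper)."""
    perimeter = 2.0 * (plate_width_m + plate_thickness_m)
    return force_N / (perimeter * math.cos(math.radians(contact_angle_deg)))

# Example with invented numbers: a 19.9 mm x 0.1 mm plate pulled by
# 2.91 mN at full wetting (contact angle 0)
gamma = wilhelmy_surface_tension(2.91e-3, 19.9e-3, 0.1e-3)  # in N/m
```

With these numbers the result is about 72.8 mN/m, i.e. of the order of the surface tension of water at room temperature.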
Recurrence plot analyses suggest a novel reference system involved in newborn spontaneous movements
(2006)
The movements of newborns have been thoroughly studied in terms of reflexes, muscle synergies, leg coordination, and target-directed arm/hand movements. Since these approaches have concentrated mainly on separate accomplishments, there has remained a clear need for more integrated investigations. Here, we report an inquiry in which we explicitly concentrated on taking such a perspective and, additionally, were guided by the methodological concept of home base behavior, which Ilan Golani developed for studies of exploratory behavior in animals. Methods from nonlinear dynamics, such as symbolic dynamics and recurrence plot analyses of kinematic data obtained from audiovisual newborn recordings, yielded new insights into the spatial and temporal organization of limb movements. In the framework of home base behavior, our approach uncovered a novel reference system of spontaneous newborn movements.
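A recurrence plot of the kind used here can be sketched in a few lines; the one-dimensional signal below is a hypothetical stand-in for the multivariate kinematic recordings:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence plot: R[i, j] = 1 if |x_i - x_j| < eps.
    Minimal 1-D sketch; the study works with multivariate data."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

# Hypothetical kinematic signal: a noisy oscillation
t = np.linspace(0, 8 * np.pi, 400)
rng = np.random.default_rng(0)
x = np.sin(t) + 0.1 * rng.standard_normal(t.size)
R = recurrence_matrix(x, eps=0.2)

# Recurrence rate: the fraction of recurrent point pairs
rr = R.mean()
```

Diagonal line structures in R indicate recurring trajectory segments; quantifying them (recurrence rate, determinism, etc.) is the basis of recurrence quantification analysis.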
Experimental results show that the polymerization of pyrrole in the presence of beta-naphthalenesulfonic acid and different fluorosurfactants like perfluorooctanesulfonic acid, perfluorooctyldiethanolamide, and ammonium perfluorooctanoate leads to polypyrrole with special morphologies, such as rings or disks and rectangular frames or plates. The formation of these unusually shaped particles of polymer dispersions is explained by the chemical and colloidal peculiarities of the oxidative pyrrole polymerization with ammonium peroxodisulfate in aqueous medium.
The layer-by-layer assembly (LBL) of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process. The control of the permeability of these layers is particularly important as there are potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work presents two methods that can be used as employable drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is known to be a thermo-sensitive polymer with a lower critical solution temperature (LCST) around 32 °C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that the LCST decreases with the addition of salt. This work shows Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of the PNIPAM inside the capsule was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change), and CLSM (in situ LCST measurement inside the capsules). The exploitation of the capsules as microcontainers is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to a reduced water sensitivity, particularly after heating, which melts the wax and forms a barrier layer.
This conclusion was proven with neutron reflectivity by showing the decreased presence of D2O in planar polyelectrolyte films after annealing, which creates a barrier layer. The permeability of capsules could also be decreased by the addition of a wax layer. This was proved by the increase in recovery time in Fluorescence Recovery After Photobleaching (FRAP) measurements. In general, two advanced methods, potentially suitable for drug delivery systems, have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation, coupled with the ability to tune the wall thickness, gives the ability to control the release profile of the molecule of interest.
Uncertainty about the sensitivity of the climate system to changes in the Earth's radiative balance constitutes a primary source of uncertainty for climate projections. Given the continuous increase in atmospheric greenhouse gas concentrations, constraining the uncertainty range of this sensitivity is of vital importance. A common measure for expressing this key characteristic of climate models is the climate sensitivity, defined as the simulated change in global-mean equilibrium temperature resulting from a doubling of the atmospheric CO2 concentration. The broad range of climate sensitivity estimates (1.5-4.5°C, as given in the last Assessment Report of the Intergovernmental Panel on Climate Change, 2001), inferred from comprehensive climate models, illustrates that the strength of simulated feedback mechanisms varies strongly among different models. The central goal of this thesis is to constrain uncertainty in climate sensitivity. For this objective we first generate a large ensemble of model simulations covering different feedback strengths and then test their consistency with present-day observational data and proxy data from the Last Glacial Maximum (LGM). Our analyses are based on an ensemble of fully coupled simulations realized with a climate model of intermediate complexity (CLIMBER-2). These model versions cover a broad range of climate sensitivities, from 1.3 to 5.5°C, and have been generated by simultaneously perturbing a set of 11 model parameters. The analysis of the simulated model feedbacks reveals that the spread in climate sensitivity results from different realizations of the feedback strengths in water vapour, clouds, lapse rate, and albedo. The calculated spread in the sum of all feedbacks spans almost the entire plausible range inferred from a sampling of more complex models.
We show that the requirement for consistency between simulated pre-industrial climate and a set of seven global-mean data constraints represents a comparatively weak test for model sensitivity (the data constrain climate sensitivity to 1.3-4.9°C). Analyses of the simulated latitudinal profile and of the seasonal cycle suggest that additional present-day data constraints, based on these characteristics, do not further constrain uncertainty in climate sensitivity. The novel approach presented in this thesis consists in systematically combining a large set of LGM simulations with data information from reconstructed regional glacial cooling. Irrespective of uncertainties in model parameters and feedback strengths, the set of our model versions reveals a close link between the simulated warming due to a doubling of CO2, and the cooling obtained for the LGM. Based on this close relationship between past and future temperature evolution, we define a method (based on linear regression) that allows us to estimate robust 5-95% quantiles for climate sensitivity. We thus constrain the range of climate sensitivity to 1.3-3.5°C using proxy-data from the LGM at low and high latitudes. Uncertainties in glacial radiative forcing enlarge this estimate to 1.2-4.3°C, whereas the assumption of large structural uncertainties may increase the upper limit by an additional degree. Using proxy-based data constraints for tropical and Antarctic cooling we show that very different absolute temperature changes in high and low latitudes all yield very similar estimates of climate sensitivity. On the whole, this thesis highlights that LGM proxy-data information can offer an effective means of constraining the uncertainty range in climate sensitivity and thus underlines the potential of paleo-climatic data to reduce uncertainty in future climate projections.
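The core of the regression approach, using a near-linear link between simulated LGM cooling and 2xCO2 warming to propagate a proxy constraint into sensitivity quantiles, can be sketched with synthetic numbers (the slope, scatter, and proxy values below are invented for illustration; they are not the CLIMBER-2 ensemble or the thesis' results):

```python
import numpy as np

# Synthetic ensemble: each "model version" has an LGM cooling C (K)
# and a 2xCO2 warming S that is assumed nearly linear in C.
rng = np.random.default_rng(1)
C = rng.uniform(3.0, 8.0, 50)                       # hypothetical coolings
S = 0.55 * C + 0.3 + 0.2 * rng.standard_normal(50)  # hypothetical link

a, b = np.polyfit(C, S, 1)        # least-squares fit S ~ a*C + b

# Propagate a hypothetical proxy-based cooling estimate (with its
# uncertainty) through the fitted relation to get 5-95% quantiles
# for the sensitivity.
proxy_C = rng.normal(5.5, 0.7, 10000)
S_est = a * proxy_C + b
lo, hi = np.quantile(S_est, [0.05, 0.95])
```

The design choice is that the ensemble itself supplies the transfer function between past and future temperature change, so the proxy uncertainty maps directly onto a sensitivity range.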
Stars are born in turbulent molecular clouds that fragment and collapse under the influence of their own gravity, forming clusters of a hundred or more stars. The star formation process is controlled by the interplay between supersonic turbulence and gravity. In this work, the properties of stellar clusters created by numerical simulations of gravoturbulent fragmentation are compared to those from observations. This includes the analysis of properties of individual protostars as well as statistical properties of the entire cluster. It is demonstrated that protostellar mass accretion is a highly dynamical and time-variant process. The peak accretion rate is reached shortly after the formation of the protostellar core. It is about one order of magnitude higher than the constant accretion rate predicted by the collapse of a classical singular isothermal sphere, in agreement with the observations. For a more reasonable comparison, the model accretion rates are converted to the observables bolometric temperature, bolometric luminosity, and envelope mass. The accretion rates from the simulations are used as input for an evolutionary scheme. The resulting distribution in the Tbol-Lbol-Menv parameter space is then compared to observational data by means of a 3D Kolmogorov-Smirnov test. The highest probability found that the distributions of model tracks and observational data points are drawn from the same population is 70%. The ratios of objects belonging to different evolutionary classes in observed star-forming clusters are compared to the temporal evolution of the gravoturbulent models in order to estimate the evolutionary stage of a cluster. While it is difficult to estimate absolute ages, the relative numbers of young stars reveal the evolutionary status of a cluster with respect to other clusters. The sequence shows Serpens as the youngest and IC 348 as the most evolved of the investigated clusters.
Finally, the structures of young star clusters are investigated by applying different statistical methods, such as the normalised mean correlation length and the minimum spanning tree technique, as well as a newly defined measure for the cluster elongation. The clustering parameters of the model clusters in many cases correspond well to those of observed clusters. The temporal evolution of the clustering parameters shows that the star cluster builds up from several subclusters and evolves to a more centrally concentrated cluster, while the cluster expands more slowly than new stars are formed.
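Two of the clustering measures named above can be sketched for point sets in the plane; Prim's algorithm, the lack of normalisation, and the synthetic point sets are illustrative choices, not the thesis' implementation:

```python
import numpy as np

def mst_mean_edge(points):
    """Mean edge length of the Euclidean minimum spanning tree,
    built with Prim's algorithm (O(n^2) sketch)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = d[0].copy()          # cheapest connection of each node to the tree
    edges = []
    for _ in range(n - 1):
        best[in_tree] = np.inf
        j = np.argmin(best)     # nearest node outside the tree
        edges.append(best[j])
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return float(np.mean(edges))

def mean_correlation_length(points):
    """Mean pairwise separation of cluster members (un-normalised here)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return float(d[np.triu_indices(n, 1)].mean())

rng = np.random.default_rng(2)
clustered = rng.normal(0.0, 0.1, (100, 2))   # hypothetical compact cluster
uniform = rng.uniform(-1.0, 1.0, (100, 2))   # hypothetical dispersed case
```

Both measures shrink for centrally concentrated configurations, which is what makes their temporal evolution a tracer of how a cluster assembles from subclusters.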
In the present work, phenomena in the ionosphere are studied that are connected with earthquakes (16 events) having a depth of less than 50 km and a magnitude M larger than 4. Night-time Es-spread effects are analysed using data of the vertical sounding station Petropavlovsk-Kamchatsky (φ=53.0°, λ=158.7°) from May 2004 until August 2004, registered every 15 minutes. It is found that the maximum distance of the earthquake from the sounding station at which pre-seismic phenomena are still observable depends on the magnitude of the earthquake. Further, it is shown that 1-2 days before the earthquakes, in the pre-midnight hours, the occurrence of Es-spread increases. The reliability of this increase amounts to 0.95.
A statistical analysis of the variations of the daily-mean critical frequency foF2 of the maximum ionospheric electron density is performed in connection with the occurrence of (more than 60) earthquakes with magnitudes M > 6.0, depths h < 80 km, and distances from the vertical sounding station R < 1000 km. For the study, data of the Tokyo sounding station are used, which were registered every hour in the years 1957-1990. It is shown that, on average, foF2 decreases before the earthquakes. One day before the shock the decrease amounts to about 5%. The statistical reliability of this phenomenon is found to be better than 0.95. Further, the variations of the occurrence probability of turbulence of the F layer (F spread) are investigated for (more than 260) earthquakes with M > 5.5, h < 80 km, R < 1000 km. For this analysis, data of the Japanese station Akita from 1969-1990 are used, which were obtained every hour. It is found that before the earthquakes the occurrence probability of F spread decreases. In the week before the event, the decrease has values of more than 10%. The statistical reliability of this phenomenon is also larger than 0.95. In examining the seismo-ionospheric effects, only periods with weak heliogeomagnetic disturbances are considered, for which the Wolf number is less than 100 and the index ΣKp is smaller than 30.
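The meaning of a statistical reliability above 0.95 can be illustrated with a simple one-sided sign test; this statistic and the counts below are hypothetical illustrations, not the paper's actual analysis:

```python
from math import comb

def sign_test_p(n_events, n_decreases):
    """One-sided binomial sign test: probability of observing at least
    n_decreases 'parameter decreased before the quake' outcomes out of
    n_events under the null hypothesis p = 0.5 (illustrative only)."""
    total = sum(comb(n_events, k) for k in range(n_decreases, n_events + 1))
    return total / 2**n_events

# Hypothetical count: foF2 decreased before 40 out of 60 earthquakes
p = sign_test_p(60, 40)
reliability = 1.0 - p
```

If decreases were chance alone, 40 or more out of 60 would be very unlikely, so the corresponding reliability exceeds the 0.95 threshold quoted in the abstract.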
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, a detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data like tree rings or sediment as well as ice cores. In order to appropriately interpret such sources of palaeoclimatic information, suitable approaches of statistical modelling as well as methods of time series analysis are necessary, which are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace-element abundances and grain-size distributions obtained near Cape Roberts (Eastern Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transportation and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship with the mutual component overlap as well as with the information missing due to grouping and truncation of the measured data is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals that there are certain problems accompanying the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
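The principal-component decomposition used in the alternative approach can be sketched as follows; the synthetic two-mode grain-size data below are a stand-in for the Lake Baikal record, not the actual measurements:

```python
import numpy as np

# Synthetic sequence of grain-size distributions: rows = samples,
# columns = size bins. A fixed fine-grained mode plus a coarse mode
# whose weight varies periodically (a hypothetical "insolation" signal).
rng = np.random.default_rng(3)
n_samples, n_bins = 200, 30
bins = np.arange(n_bins)
fine = np.exp(-0.5 * ((bins - 15) / 4.0) ** 2)
coarse = np.exp(-0.5 * ((bins - 24) / 2.0) ** 2)
weights = 0.5 + 0.4 * np.sin(np.linspace(0, 6 * np.pi, n_samples))
X = np.outer(np.ones(n_samples), fine) + np.outer(weights, coarse)
X += 0.01 * rng.standard_normal(X.shape)    # measurement noise

# Linear PCA via SVD of the centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)    # variance fraction per component
scores = Xc @ Vt[0]                # time series of the first component
```

Because only one mode varies, the first principal component absorbs nearly all the variance and its score series recovers the driving signal (up to sign), which is the sense in which PCA yields interpretable "fractions" of the sediment record.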
Since their discovery in 1610 by Galileo Galilei, Saturn's rings have continued to fascinate both experts and amateurs. Countless icy grains in almost Keplerian orbits reveal a wealth of structures such as ringlets, voids and gaps, wakes and waves, and many more. Grains are found to increase in size with increasing radial distance from Saturn. Recently discovered "propeller" structures in the Cassini spacecraft data provide evidence for the existence of embedded moonlets. In the wake of these findings, the discussion about the origin and evolution of planetary rings, and about growth processes in tidal environments, has resumed. In this thesis, a contact model for binary adhesive, viscoelastic collisions is developed that accounts for agglomeration as well as restitution. Collisional outcomes are crucially determined by the impact speed and the masses of the collision partners; the model yields a maximal impact velocity at which agglomeration still occurs. Based on the latter, a self-consistent kinetic concept is proposed. The model considers all possible collisional outcomes, namely coagulation, restitution, and fragmentation. Emphasizing the evolution of the mass spectrum and furthermore concentrating on coagulation alone, a coagulation equation including a restricted sticking probability is derived. The otherwise phenomenological Smoluchowski equation is reproduced from basic principles and denotes a limit case of the derived coagulation equation. The relevance of adhesion to force-free granular gases, and to those under the influence of Keplerian shear, is analysed qualitatively and quantitatively. Capture probability, agglomerate stability, and the evolution of the mass spectrum are investigated in the context of adhesive interactions. A size-dependent radial limit distance from the central planet is obtained, refining the Roche criterion. Furthermore, the capture probability in the presence of adhesion is generally different from the case of pure gravitational capture.
In contrast to a Smoluchowski-type evolution of the mass spectrum, numerical simulations of the obtained coagulation equation reveal that a transition from smaller grains to larger bodies cannot occur via a collisional cascade alone. For the parameters used in this study, effective growth ceases at average sizes of the order of centimeters.
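The role of a restricted sticking probability in a Smoluchowski-type description can be sketched with a minimal discrete model; the constant kernel, the p_stick value, and the size cutoff are illustrative assumptions, not the thesis' derived coagulation equation:

```python
import numpy as np

def smoluchowski_step(n, dt, kernel=1.0, p_stick=0.3):
    """One explicit Euler step of the discrete Smoluchowski equation
    with a constant kernel scaled by a sticking probability.
    n[k] is the number density of (k+1)-mers."""
    kmax = len(n)
    dn = np.zeros_like(n)
    for i in range(kmax):
        for j in range(kmax):
            rate = p_stick * kernel * n[i] * n[j]
            dn[i] -= rate                    # i-mer consumed in collisions
            if i + j + 1 < kmax:             # merged cluster within cutoff
                dn[i + j + 1] += 0.5 * rate  # factor 1/2 avoids double count
    return n + dt * dn

n = np.zeros(50)
n[0] = 1.0                                   # start with monomers only
for _ in range(200):
    n = smoluchowski_step(n, dt=0.05)

mass = np.sum((np.arange(50) + 1) * n)       # total mass, approx. conserved
```

Reducing p_stick simply rescales time for a constant kernel; the velocity- and mass-dependent sticking probability of the thesis is what changes the qualitative outcome and stalls growth at centimeter sizes.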
When Galactic microlensing events of stars are observed, one usually measures a symmetric light curve corresponding to a single lens, or an asymmetric light curve, often with caustic crossings, in the case of a binary lens system. In principle, the fraction of binary stars in a certain separation range can be estimated from the number of measured microlensing events. However, a binary system may produce a light curve that can be fitted well as a single-lens light curve, in particular if the data sampling is poor and the error bars are large. We investigate what fraction of microlensing events produced by binary stars at different separations may be well fitted by, and hence misinterpreted as, single-lens events under various observational conditions. We find that this fraction strongly depends on the separation of the binary components, reaching its minimum between 0.6 and 1.0 Einstein radii, where it is still of the order of 5%. The Einstein radius corresponds to a few AU for typical Galactic microlensing scenarios. The rate of misinterpretation is higher for short microlensing events lasting up to a few months and for events with smaller maximum amplification. For fixed separation it increases for binaries with more extreme mass ratios. The problem of degeneracy in the photometric light-curve solution between binary-lens and binary-source microlensing events was studied on simulated data and on data observed by the PLANET collaboration. The fitting code BISCO, using the PIKAIA genetic-algorithm optimization routine, was written to optimize binary-source microlensing light curves observed at different sites in the I, R, and V photometric bands. Tests on simulated microlensing light curves show that BISCO is successful in finding the solution to a binary-source event in a very wide parameter space. A flux-ratio method is suggested in this work for breaking the degeneracy between binary-lens and binary-source photometric light curves.
Models show that only a few additional data points in the photometric V band, together with a full light curve in the I band, will enable breaking the degeneracy. Very good data quality and dense data sampling, combined with accurate binary-lens and binary-source modeling, led to the discovery of the lowest-mass planet found outside the Solar System so far, OGLE-2005-BLG-390Lb, with only 5.5 Earth masses. This was the first observed microlensing event in which the degeneracy between a planetary binary-lens and an extreme flux-ratio binary-source model has been successfully broken. For the events OGLE-2003-BLG-222 and OGLE-2004-BLG-347, the degeneracy was encountered despite very dense data sampling. From light-curve modeling and stellar evolution theory, there was a slight preference for explaining OGLE-2003-BLG-222 as a binary-source event and OGLE-2004-BLG-347 as a binary-lens event. However, without spectra, this degeneracy cannot be fully broken. So far, no planet has been found around a white dwarf, though it is believed that Jovian planets should survive the late stages of stellar evolution and that white dwarfs will retain planetary systems in wide orbits. We want to perform high-precision astrometric observations of nearby white dwarfs in wide binary systems with red dwarfs in order to find planets around white dwarfs. We selected a sample of observing targets (WD-RD binary systems, not published yet) that can possibly host planets around the WD component, and modeled synthetic astrometric orbits that could be observed for these targets using existing and future astrometric facilities. The modeling was performed for astrometric accuracies of 0.01, 0.1, and 1.0 mas, separations between WD and planet of 3 and 5 AU, a binary-system separation of 30 AU, planet masses of 10 Earth masses and 1 and 10 Jupiter masses, WD masses of 0.5 and 1.0 solar masses, and distances to the system of 10, 20, and 30 pc.
It was found that the PRIMA facility at the VLTI, once it is operating, will be able to detect planets down to 1 Jupiter mass around white dwarfs by measuring the astrometric wobble of the WD due to a planet companion. We show for the simulated observations that it is possible to model the orbits and find the parameters describing the potential planetary systems.
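For reference, the symmetric single-lens light curve mentioned at the outset follows the standard point-lens magnification A(u) = (u² + 2) / (u √(u² + 4)); the event parameters in the example are hypothetical:

```python
import math

def single_lens_magnification(u):
    """Point-lens (Paczynski) magnification A(u) for impact parameter u
    in units of the Einstein radius."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def light_curve(t, t0, tE, u0):
    """Single-lens light curve: rectilinear source motion gives
    u(t) = sqrt(u0^2 + ((t - t0)/tE)^2)."""
    u = math.hypot(u0, (t - t0) / tE)
    return single_lens_magnification(u)

# Peak magnification of a hypothetical event with impact parameter
# u0 = 0.1 and Einstein time tE = 20 days, evaluated at t = t0
A_max = light_curve(0.0, 0.0, 20.0, 0.1)   # about 10
```

A binary lens or binary source distorts this symmetric curve; the degeneracy discussed above arises when the distortion is too subtle for the sampling and photometric errors at hand.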
Variations of the stratospheric residual circulation and their influence on the ozone distribution
(2006)
The residual circulation corresponds to the mean mass circulation and describes the meridional transport processes taking place in the zonal mean. Variations of the residual circulation, together with anthropogenic ozone depletion, determine the year-to-year fluctuations of the total ozone column in the Arctic spring. In the present work, the speed of the Arctic branch of the residual circulation is derived from atmospheric data. For this purpose, the diabatic descent in the polar vortex is determined by means of trajectory calculations. The vertical motions of the air parcels can be driven by vertical wind fields or, following a new approach, by diabatic heating rates. The input data come from the 45-year reanalysis data set of the European Centre for Medium-Range Weather Forecasts (ECMWF). In addition, the operational ECMWF analysis can be used for the years from 1984 onward. The quality and robustness of the heating-rate and trajectory calculations are corroborated by sensitivity studies and comparisons with other models. Subsequently, extensive trajectory ensembles are evaluated statistically in order to obtain a detailed, time- and altitude-resolved picture of the diabatic descent. In this context, two methods are developed to determine the descent averaged over the polar vortex or as a function of equivalent latitude. It is shown that it is necessary to follow the Lagrangian approach based on trajectory calculations, since the simple Eulerian means deviate from the Lagrangian vertical velocities. The vortex-averaged descent is compared for individual winters with the observed descent of long-lived trace gases and with other model studies. The comparison shows that the descent based on the vertical wind fields of the ECMWF data sets strongly overestimates the net air-mass transport by the residual circulation.
The new approach based on the heating rates, by contrast, yields realistic results and is therefore used for all calculations. For the first time, a climatology of the diabatic descent is compiled over a period spanning almost five decades. The climatology comprises the vertically and temporally resolved diabatic descent averaged over the entire polar vortex, as well as information on the spatial structure of the vertical descent. The natural year-to-year variability of the diabatic descent is very pronounced. It is shown that there are high correlations between the ECMWF time series of the diabatic descent and the time series from an independently analysed temperature data set. For the first time, the influence of transport processes on the total ozone column in the Arctic spring is quantified directly. It is shown that the year-to-year variability of the total ozone column in the Arctic spring is influenced in equal parts by the variability of the dynamical component and by the variability of the chemical component. The variabilities found for diabatic descent and ozone input at high latitudes are related to the vertical propagation of planetary waves from the troposphere into the stratosphere.
What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades, powerful computers have been capable of simulating the past and future evolution of the climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than $\alpha=0.5$ at large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function.
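The DFA procedure and the local slopes discussed above can be sketched as follows. This is a generic first-order DFA implementation, not the code used in the thesis, applied here to synthetic white noise, for which the asymptotic scaling exponent is alpha = 0.5.

```python
import numpy as np

def dfa_fluctuation(x, scales, order=1):
    """Fluctuation function F(s) of detrended fluctuation analysis."""
    y = np.cumsum(x - np.mean(x))               # profile of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        residuals = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)    # local polynomial trend
            residuals.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(residuals)))
    return np.array(F)

def local_slopes(scales, F):
    """Local scaling exponents alpha(s) from adjacent points of log F vs log s."""
    return np.diff(np.log(F)) / np.diff(np.log(scales))

rng = np.random.default_rng(0)
x = rng.standard_normal(10000)                  # white noise: alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa_fluctuation(x, scales)
alpha = local_slopes(scales, F)
```

Rather than fitting a single global slope to log F versus log s, inspecting `alpha` scale by scale (together with empirical confidence regions from an ensemble of surrogate realizations) is exactly what the argument above calls for before long memory is claimed.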
It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between El Nino/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Periods of coupling that were so far unknown and are invisible to linear methods are also detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system.
Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
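The phase-based coupling analysis can be illustrated with a minimal sketch: phases are extracted with an FFT-based Hilbert transform (the curvature-defined phase of the thesis is not reproduced here), and coherence is measured by the mean resultant length of the 1:1 phase difference. All signals below are synthetic.

```python
import numpy as np

def instantaneous_phase(x):
    """Phase of the analytic signal of x, via an FFT-based Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x - np.mean(x))
    h = np.zeros(n)                 # frequency-domain filter: keep positive
    h[0] = 1.0                      # frequencies doubled, negative ones zeroed
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    return np.angle(analytic)

def phase_coherence(x, y):
    """Mean resultant length of the phase difference (1 = perfect locking)."""
    dphi = instantaneous_phase(x) - instantaneous_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.arange(2000)
a = np.sin(2 * np.pi * t / 50.0)          # oscillation with a 50-sample period
b = np.sin(2 * np.pi * t / 50.0 + 0.7)    # phase-shifted copy: phase-locked
rng = np.random.default_rng(1)
c = rng.standard_normal(2000)             # unrelated noise: no locking

locked = phase_coherence(a, b)            # close to 1
unrelated = phase_coherence(a, c)         # much smaller
```

Evaluating this index in sliding windows, with a surrogate-based significance test, is one simple way to identify intervals of phase coherence such as those reported above; note that the index deliberately ignores the amplitudes of the two signals.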
The primary objective of this work was to develop a laser source for fundamental investigations in the field of laser-materials interactions. In particular, it is intended to facilitate the study of the influence of the temporal energy distribution, such as the interaction between adjacent pulses, on ablation processes. The aim was therefore to design a laser with a highly flexible and easily controllable temporal energy distribution. The laser meeting these demands is an SBS laser with optional active mode-locking. The nonlinear reflectivity of the SBS mirror leads to passive Q-switching, so that the laser emits ns-pulse bursts with µs spacing. The pulse-train parameters, such as pulse duration, pulse spacing, pulse energy and number of pulses within a burst, can be individually adjusted by tuning the pump parameters and the starting conditions of the laser. Another feature of the SBS reflection is phase conjugation, which leads to an excellent beam quality thanks to the compensation of phase distortions. Transverse fundamental-mode operation and a beam quality better than 1.4 times the diffraction limit can be maintained for average output powers of up to 10 W. In addition to the dynamics on the ns timescale described above, a defined splitting of each ns pulse into a train of ps pulses can be achieved by additional active mode-locking. This twofold temporal focussing of the intensity leads to single-pulse energies of up to 2 mJ at pulse durations of approximately 400 ps, which corresponds to a pulse peak power of 5 MW. While the pulse duration is of the same order of magnitude as that of other passively Q-switched lasers with simultaneous mode-locking, the pulse energy and pulse peak power exceed the values of such systems found in the literature by an order of magnitude. To the best of my knowledge, the laser presented here is the first implementation of a self-starting mode-locked SBS laser oscillator.
In order to gain a better understanding and control of the transient output of the laser, two complementary numerical models were developed. The first is based on laser rate equations that are solved for each laser mode individually, while the mode-locking dynamics are calculated from the resulting transient spectrum. The rate equations consider the mean photon densities in the resonator, so the propagation of the light inside the resonator is not properly represented. The second model, in contrast, introduces a spatial resolution of the resonator, and hence the propagation inside the resonator can be treated more accurately. Consequently, a mismatch between the loss-modulation frequency and the resonator round-trip time can be taken into account. This model calculates all dynamics in the time domain, and therefore spectral influences such as the Stokes shift have to be neglected. Both models achieve an excellent reproduction of the ns dynamics generated by the SBS Q-switch. Separately, each model fails to reproduce all aspects of the ps dynamics of the SBS laser in detail, which can be attributed to the complexity of the numerous physical processes involved in this system. Thanks to their complementary nature, however, they provide a very useful tool for investigating the various influences on the dynamics of the mode-locked SBS laser individually. These aspects can eventually be recomposed to give a complete picture of the mechanisms that govern the output dynamics. Among the aspects under scrutiny were, in particular, the quality of the start resonator, which determines the starting condition for the SBS Q-switch, the modulation depth of the AOM, and the phonon lifetime as well as the Brillouin frequency of the SBS medium.
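The rate-equation approach underlying the first model can be illustrated with a standard dimensionless reduction for a passively Q-switched laser; this is a textbook sketch of a single giant pulse, not the multimode model of the thesis.

```python
# Dimensionless Q-switch rate equations: phi = photon number, n = inversion
# normalised to threshold.  During the fast Q-switch pulse, pumping and
# spontaneous decay are neglected, so the pulse burns down the inversion.

def q_switch_pulse(n0=2.0, phi0=1e-6, dt=1e-3, t_max=30.0):
    """Euler integration of the pulse buildup from a seed photon number phi0."""
    n, phi = n0, phi0
    phi_peak = phi
    for _ in range(int(t_max / dt)):
        dphi = phi * (n - 1.0)   # net gain while the inversion is above threshold
        dn = -phi * n            # inversion depleted by stimulated emission
        phi += dphi * dt
        n += dn * dt
        phi_peak = max(phi_peak, phi)
    return phi_peak, n

peak, n_final = q_switch_pulse()
# Analytically, the peak photon number is (n0 - 1) - ln(n0) ~ 0.307 for n0 = 2,
# reached when the inversion crosses threshold (n = 1).
```

The mean-photon-density character of this description is exactly the limitation noted above: there is no spatial coordinate, so intracavity propagation and a modulator detuned from the round-trip time cannot be represented at this level.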
The numerical simulations and the experiments have opened several doors inviting further investigations and promising potential for further improvement of the experimental results. The results of the simulations, in combination with the experimental results that determined the starting conditions for the simulations, leave no doubt that the bandwidth generation can primarily be attributed to the SBS Stokes shift during the buildup of the Q-switch pulse. In each resonator round trip, bandwidth is generated by shifting a part of the circulating light in frequency. The magnitude of the frequency shift corresponds to the Brillouin frequency, which is a constant of the SBS material and amounts to 240 MHz in the case of SF6. The modulation of the AOM merely provides an exchange of population between spectrally adjacent modes and therefore diminishes a modulation in the spectrum. By using a material with a Brillouin frequency in the GHz range, the bandwidth generation can be considerably accelerated, thereby shortening the pulse duration. It was also demonstrated that yet another nonlinear effect of the SBS can be exploited: if the phonon lifetime is short compared to the resonator round-trip time, one obtains a modulation in the SBS reflectivity that supports the modulation of the AOM. The application of external optical feedback by a conventional mirror turns out to be an alternative to the AOM for synchronizing the longitudinal resonator modes. The interesting feature of this system is that it is, although highly complex in its physical processes and temporal output dynamics, very simple and inexpensive from a technical point of view: no expensive modulators and no control electronics are necessary. Finally, the numerical models constitute a powerful tool for the investigation of the emission dynamics of complex laser systems on arbitrary timescales and can also display the spectral evolution of the laser output.
In particular, it could be demonstrated that differences in the results of the complementary models vanish for systems of lesser complexity.
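The round-trip bandwidth accumulation by the Stokes shift described above can be put into numbers with a back-of-envelope sketch; the resonator length is an assumed value, while the 240 MHz Brillouin frequency of SF6 is taken from the text.

```python
# Back-of-envelope sketch: bandwidth accumulated by one SBS Stokes shift per
# resonator round trip during Q-switch pulse buildup.  The resonator length
# is a hypothetical assumption, not a value from the thesis.

C = 2.998e8          # speed of light [m/s]
L = 1.5              # resonator length [m] (assumed)
nu_B = 240e6         # Brillouin frequency of SF6 [Hz]

t_rt = 2 * L / C     # round-trip time, here about 10 ns

def bandwidth_after(t):
    """Bandwidth [Hz] after buildup time t, at one Brillouin shift per round trip."""
    return (t / t_rt) * nu_B

bw = bandwidth_after(1e-6)   # ~24 GHz after 1 microsecond of buildup
```

This makes the scaling argument above concrete: replacing SF6 by a medium with a GHz-range Brillouin frequency multiplies the accumulated bandwidth per round trip, and hence shortens the achievable pulse duration, by roughly an order of magnitude.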