Document Type: Master's Thesis (11)
Institute: Institut für Physik und Astronomie
Given the importance of experiments in the process of gaining physical knowledge, they are an essential component of physics instruction. To promote the use of experiments in physics teaching, competence-oriented experimentation and reflection on the use of experiments are important goals of teacher education programmes. Process models of competence-oriented experimentation typically distinguish phases of question and hypothesis development, planning, investigation, and drawing conclusions. It is unclear, however, in what way pre-service physics teachers employ aspects of competence-oriented experimentation in their lessons during practical training phases, how they reflect on such experiment-related teaching attempts, and how structured (in the sense of the process models) their approach is.
The present study therefore examined how student teachers in their practical semester reflect on experimentation processes in their teaching attempts. To this end, it was analysed to what extent the individual phases of experimentation are addressed in the reflections. To further determine the quality with which experiment planning is reflected on, and to what extent pre-structuring is evident in the planning phase, this phase was examined in more detail. Based on prior empirical work, it was hypothesised that question development, hypothesis formation and experiment planning are addressed less frequently than the other sub-competences, and that the planning phase mainly contains strongly pre-structured elements rather than leaving learners room for independent planning.
To investigate these questions, coding manuals for capturing experiment-related competences in written reflections were developed and validated. Forty reflection texts by 14 pre-service physics teachers in their practical semester at the University of Potsdam were analysed. Qualitative content analysis was used as the research method. The texts were examined with regard to the implementation of a reflection model and the occurrence of the sub-competences of the experimentation cycle.
The results confirmed the low occurrence of question development and hypothesis formation as well as the tendency towards closed planning content. It was also found that the planning phase was reflected on rather superficially, with mainly work assignments being reproduced. Overall, the reflections showed predominantly descriptive tendencies and comparatively few alternatives and consequences. Implications for physics teacher education are derived from these results. To foster the reflective competence of pre-service teachers, support during the reflection process and substantive feedback are necessary. Furthermore, pre-service teachers should be sensitised to a more balanced promotion of the sub-competences in their teaching.
The forcing from the anthropogenic heat flux (AHF), i.e., the dissipation of the primary energy consumed by human civilisation, produces a direct climate warming. Today, the globally averaged AHF is negligibly small compared to the indirect forcing from greenhouse gas emissions. Locally or regionally, though, it has a significant impact. Historical observations show a steady exponential growth of worldwide energy production. A continuation of this trend might be fueled or even amplified by the exploitation of new carbon-free energy sources such as fusion power. In such a scenario, the impacts of the AHF become a relevant factor for anthropogenic post-greenhouse-gas climate change on the global scale as well.
This master's thesis aims at estimating the climate impacts of such a growing AHF forcing. In the first part of this work, the AHF is built into simple, conceptual zero- and one-dimensional Energy Balance Models (EBMs), providing quick order-of-magnitude estimates of the temperature impact. In the one-dimensional EBM, the ice-albedo feedback from enhanced ice melting due to the AHF increases the temperature impact significantly compared to the zero-dimensional EBM.
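The zero-dimensional balance can be sketched in a few lines. The albedo and effective emissivity below are illustrative assumptions tuned to a present-day mean temperature near 288 K, not the thesis's parameter values:

```python
# Zero-dimensional energy balance: (1 - alpha) * S0 / 4 + F_ahf = eps * sigma * T^4
# Illustrative sketch; ALPHA and EPS are assumed, not taken from the thesis.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0    = 1361.0     # solar constant, W m^-2
ALPHA = 0.30       # planetary albedo (assumed)
EPS   = 0.61       # effective emissivity, tuned so that T ~ 288 K (assumed)

def equilibrium_temperature(f_ahf=0.0):
    """Solve eps * sigma * T**4 = (1 - alpha) * S0 / 4 + f_ahf for T (in K)."""
    absorbed = (1.0 - ALPHA) * S0 / 4.0 + f_ahf
    return (absorbed / (EPS * SIGMA)) ** 0.25

# Present-day global mean AHF: roughly 18 TW spread over Earth's surface (~5.1e14 m^2)
f_today = 18e12 / 5.1e14          # about 0.035 W m^-2
dT = equilibrium_temperature(f_today) - equilibrium_temperature(0.0)
```

With these assumed numbers the warming from today's AHF comes out near 0.01 K, which is consistent in order of magnitude with the range reported in the results below.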
Additionally, the forcing is built into a climate model of intermediate complexity, CLIMBER-3α. This allows for the investigation of the effect of localised AHF and gives further insights into the impact of the AHF on processes such as ocean heat uptake, sea-ice and snow-pattern changes, and the ocean circulation.
The global mean temperature response from the AHF today is of the order of 0.010–0.016 K in all reasonable model configurations tested. A transient tenfold increase of this forcing additionally heats the Earth system by roughly 0.1–0.2 K in the presented models. Further growth can also affect the tipping probability of certain climate elements.
Most renewable energy sources do not, or only partially, contribute to the AHF forcing, as the energy from these sources dissipates anyway. Hence, the transition to a (carbon-free) renewable energy mix, in particular one that does not rely on nuclear power, eliminates the local and global climate impacts of the increasing AHF forcing, independently of the growth of energy production.
The subject of the present thesis is the one-dimensional Bose gas. Since long-range order is destroyed by infrared fluctuations in one dimension, only the formation of a quasi-condensate is possible, which exhibits suppressed density fluctuations but whose phase fluctuates strongly. It is shown that modified mean-field theories based on a symmetry-breaking approach can properly characterise even the phase coherence properties of such a quasi-condensate. A correct description of the transition from the degenerate ideal Bose gas to the quasi-condensate, which is a smooth cross-over rather than a phase transition, is not possible, though. Basic conditions for the applicability of the theories are not fulfilled in this regime, so that a spurious critical point is predicted.
The theories are compared on the basis of their excitation spectrum, equation of state, density fluctuations and related correlation functions. High-temperature expansions of the corresponding integrals are derived analytically for the numerical evaluation of the self-consistent integral equations. Apart from that, the stochastic Gross-Pitaevskii equation (SGPE), a non-linear Langevin equation, is analysed numerically by means of Monte Carlo simulations, and the results are compared to those of the mean-field theories. In this context, particular attention is paid to the appropriate choice of the parameters. The simulations demonstrate that the SGPE is capable of describing the cross-over properly, but also highlight the limitations of the widely used local density approximation.
In this work, the effects of synchronization of nonlinear acoustic oscillators are investigated using the example of two organ pipes. The typical signatures of synchronization are extracted from existing experimental measurement data and presented. This is followed by a detailed analysis of the transition regions into the synchronization plateau, of the phenomena during synchronization, and of the exit from the synchronization region of the two organ pipes, at various coupling strengths. The experimental findings raise questions about the coupling function. To address these, sound generation in an organ pipe is examined. With the help of numerical simulations of sound generation, the question is pursued of which fluid-dynamical and aero-acoustic mechanisms underlie sound generation in the organ pipe, and to what extent these mechanisms can be mapped onto the model of a self-excited acoustic oscillator. Using the method of coarse graining, a model ansatz is formulated.
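The phase dynamics of two weakly coupled self-sustained oscillators is often reduced to an Adler-type equation for the phase difference, which exhibits the locking threshold that separates a synchronization plateau from phase drift. The detuning and coupling values below are assumed for illustration only:

```python
import math

# Adler equation for the phase difference psi between two coupled
# self-sustained oscillators: d(psi)/dt = d_omega - 2*kappa*sin(psi).
# Locking occurs for |d_omega| < 2*kappa. Parameter values are assumptions.
def beat_frequency(d_omega, kappa, dt=1e-3, steps=200_000):
    """Integrate the phase-difference dynamics and return the mean drift rate."""
    psi = 0.0
    for _ in range(steps):
        psi += (d_omega - 2.0 * kappa * math.sin(psi)) * dt
    return psi / (steps * dt)

locked   = beat_frequency(0.1, kappa=0.10)   # inside the locking region: drift ~ 0
drifting = beat_frequency(0.1, kappa=0.02)   # weak coupling: phase difference drifts
```

Inside the plateau the phase difference settles to a constant, so the beat frequency vanishes; outside, it grows at the characteristic drift rate sqrt(d_omega**2 - (2*kappa)**2).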
This thesis covers the topic "Thinning and Turbulence in Aqueous Films". Experimental studies of two-dimensional systems have gained increasing attention during the last decade. Thin liquid films serve as paradigms of atmospheric convection, thermal convection in the Earth's mantle, or turbulence in magnetohydrodynamics. Recent research on colloids, interfaces and nanofluids has led to advances in the development of micro-mixers (lab-on-a-chip devices). In this project, a detailed description of a thin-film experiment with a focus on the particular surface forces is presented. The impact of turbulence on the thinning of liquid films oriented parallel to the gravitational force is studied. An experimental setup was developed which permits the capture of thin-film interference patterns under controlled surface and atmospheric conditions. The measurement setup also serves as a prototype of a mixer based on thermally induced turbulence in liquid thin films with thicknesses in the nanometer range. The convection is realized by placing a cooled copper rod in the center of the film. The temperature gradient between the rod and the atmosphere results in a density gradient in the liquid film, so that differential buoyancy generates turbulence. In the work at hand, the thermally driven convection is characterized by a newly developed algorithm named Cluster Imaging Velocimetry (CIV). This routine determines the flow-relevant vector fields (velocity and deformation). On the basis of these insights, the flow in the experiment was investigated with respect to its mixing properties. The mixing characteristics were compared to theoretical models, and the mixing efficiency of the flow scheme was calculated. The gravitationally driven thinning of the liquid film was analyzed under the influence of turbulence. Strong shear forces lead to the generation of ultra-thin domains which consist of Newton black film.
Due to the exponential expansion of the thin areas and the efficient mixing, this two-phase flow rapidly turns into convection of only ultra-thin film. This turbulence-driven transition was observed and quantified for the first time. The existence of stable convection in liquid nanofilms was likewise proven for the first time in the context of this work.
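The film thickness is read off from the interference patterns mentioned above. For a free film in air, the reflection at the front surface picks up a pi phase jump while the back reflection does not, so dark fringes in reflection occur where the optical path difference equals an integer number of wavelengths. A minimal sketch, with an assumed refractive index and illumination wavelength:

```python
# Thicknesses at which a free aqueous film appears dark in reflection
# at normal incidence (destructive interference: 2 * n * d = m * lambda;
# the pi phase shift occurs only at the front air-film interface).
# N_WATER and LAMBDA are illustrative assumptions.
N_WATER = 1.33      # refractive index of the film (assumed)
LAMBDA  = 550e-9    # illumination wavelength in m (assumed green light)

def dark_fringe_thickness(order):
    """Film thickness of the m-th dark fringe in reflection, in meters."""
    return order * LAMBDA / (2.0 * N_WATER)

thicknesses_nm = [dark_fringe_thickness(m) * 1e9 for m in range(1, 4)]
```

The m = 0 case explains why Newton black film looks black: a film much thinner than the wavelength reflects almost nothing.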
This thesis investigates the Casimir effect between plates made of normal and superconducting metals over a broad range of temperatures, as well as the Casimir-Polder interaction of an atom with such a surface. Numerical and asymptotic calculations have been the main tools. The optical properties of the surfaces are described by dielectric functions or optical conductivities, which are reviewed for common models and analyzed with particular attention to distributional properties and causality. The calculation of the Casimir energy between two normally conducting plates (cavity) is reviewed, and previous work on the contribution to the Casimir energy due to the surface plasmons present in all metallic cavities has been generalized to finite temperatures for the first time. In the field of superconductivity, a new analytical continuation of the BCS conductivity to purely imaginary frequencies has been obtained both inside and outside the extremely dirty limit of vanishing mean free path. The Casimir free energy calculated from this description was shown to coincide well with the values obtained from the two-fluid model of superconductivity in certain regimes of the material parameters. The Casimir entropy in a superconducting cavity fulfills the third law of thermodynamics and features a characteristic discontinuity at the phase transition temperature. These effects were equally encountered in the Casimir-Polder interaction of an atom with a superconducting wall. The magnetic dipole coupling of an atom to a metal was shown to be highly sensitive to dissipation and especially to the surface currents. This leads to a strong quenching of the magnetic Casimir-Polder energy at finite temperature. Violations of the third law of thermodynamics are encountered in special models, similar to controversially debated phenomena in the Casimir effect between two plates. None of these effects occurs in the analogous electric dipole interaction.
The results of this work suggest reestablishing the well-known plasma model as the low-temperature limit of a superconductor, as in London theory, rather than using it for the description of normal metals. Superconductors offer the opportunity to control the dissipation of surface currents to a great extent. This could be used to experimentally access the low-frequency optical response of metals, which is strongly connected to the thermal Casimir effect. Here, unlike in corresponding microwave experiments, energy and momentum are independent quantities. A measurement of the total Casimir-Polder interaction of atoms with superconductors seems to be within reach in today's microchip-based atom traps, and the contribution due to magnetic coupling might be accessed by spectroscopic techniques.
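All of the material models discussed above are benchmarked against the ideal-mirror limit, where the zero-temperature Casimir pressure between parallel plates has the closed form P = -pi^2 hbar c / (240 a^4). A short sketch of this textbook reference value:

```python
import math

# Zero-temperature Casimir pressure between perfectly conducting parallel
# plates, P = -pi^2 * hbar * c / (240 * a^4): the ideal-mirror benchmark
# against which plasma, Drude or BCS descriptions are compared.
HBAR = 1.054_571_8e-34    # reduced Planck constant, J s
C    = 2.997_924_58e8     # speed of light, m/s

def casimir_pressure(a):
    """Attractive pressure in Pa between ideal mirrors at separation a (in m)."""
    return -math.pi**2 * HBAR * C / (240.0 * a**4)

p_100nm = casimir_pressure(100e-9)   # roughly -13 Pa at 100 nm separation
```

The steep a**-4 scaling is why separations below a micron dominate experimental efforts.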
In this thesis, the properties of nonlinear disordered one-dimensional lattices are investigated. Part I gives an introduction to the phenomenon of Anderson localization, the discrete nonlinear Schroedinger equation and its properties, as well as the generalization of this model by introducing the nonlinear index α. In Part II, the spreading behavior of initially localized states in large disordered chains due to nonlinearity is studied. To this end, different methods to measure localization are discussed, and the structural entropy as a measure of the peak structure of probability distributions is introduced. Finally, the spreading exponent for several nonlinear indices is determined numerically and compared with analytical approximations. Part III deals with thermalization in short disordered chains. First, the term thermalization and its application to the system in use are explained. Then, results of numerical simulations on this topic are presented, with a particular focus on the energy dependence of the thermalization properties. A connection with so-called breathers is drawn.
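A minimal sketch of two localization measures of the kind mentioned above, assuming the standard definitions: the participation number D = 1 / sum(p_i^2) and the structural entropy S_str = S_Shannon - ln D, which vanishes for a flat distribution and grows for peaked profiles:

```python
import math

# Localization measures for a normalized probability distribution p_i
# over lattice sites (sum p_i = 1). Standard definitions are assumed;
# this is an illustration, not the thesis code.
def participation_number(p):
    """Effective number of sites over which the distribution is spread."""
    return 1.0 / sum(x * x for x in p)

def structural_entropy(p):
    """Shannon entropy minus ln(participation number); 0 for a flat profile."""
    shannon = -sum(x * math.log(x) for x in p if x > 0)
    return shannon - math.log(participation_number(p))

flat   = [1.0 / 8.0] * 8                                   # fully delocalized
peaked = [0.65, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0, 0.0]        # localized profile
```

Tracking these quantities over time distinguishes genuine spreading of a wave packet from mere broadening of a few dominant peaks.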
Complex network theory provides an elegant and powerful framework to statistically investigate the topology of local and long-range dynamical interrelationships, i.e., teleconnections, in the climate system. Employing a refined methodology relying on linear and nonlinear measures of time series analysis, the intricate correlation structure within a multivariate climatological data set is cast into network form. Within this graph-theoretical framework, vertices are identified with grid points taken from the data set, each representing a region on the Earth's surface, and edges correspond to strong statistical interrelationships between the dynamics on pairs of grid points. The resulting climate networks are neither perfectly regular nor completely random, but display the intriguing and nontrivial characteristics of complexity commonly found in real-world networks such as the internet, citation and acquaintance networks, food webs and cortical networks in the mammalian brain. Among other interesting properties, climate networks exhibit the "small-world" effect and possess a broad degree distribution with dominating super-nodes as well as a pronounced community structure. We have performed an extensive and detailed graph-theoretical analysis of climate networks on the global topological scale, focussing on the flow and centrality measure betweenness, which is locally defined at each vertex but includes global topological information by relying on the distribution of shortest paths between all pairs of vertices in the network. The betweenness centrality field reveals a rich internal structure in complex climate networks constructed from reanalysis and atmosphere-ocean coupled general circulation model (AOGCM) surface air temperature data.
Our novel approach uncovers an elaborately woven meta-network of highly localized channels of strong dynamical information flow that we relate to global surface ocean currents and dub the backbone of the climate network, in analogy to the homonymous data highways of the internet. This finding points to a major role of the oceanic surface circulation in coupling and stabilizing the global temperature field in the long-term mean (140 years for the model run and 60 years for reanalysis data). Carefully comparing the backbone structures detected in climate networks constructed using linear Pearson correlation and nonlinear mutual information, we argue that the high sensitivity of betweenness to small changes in network structure may allow the footprints of strongly nonlinear physical interactions in the climate system to be detected. The results presented in this thesis are thoroughly founded and substantiated using a hierarchy of statistical significance tests on the level of time series and networks, i.e., by tests based on time series surrogates as well as network surrogates. This is particularly relevant when working with real-world data. Specifically, we developed new types of network surrogates to include the additional constraints imposed by the spatial embedding of vertices in a climate network. Our methodology is of potential interest for a broad audience within the physics community and various applied fields, because it is universal in the sense of being valid for any spatially extended dynamical system. It can help to understand the localized flow of dynamical information in any such system by combining multivariate time series analysis, a complex network approach and the information flow measure betweenness centrality. Possible fields of application include fluid dynamics (turbulence), plasma physics and biological physics (population models, neural networks, cell models).
Furthermore, the climate network approach is equally relevant for experimental data and model simulations, and hence introduces a novel perspective on model evaluation and data-driven model building. Our work is timely in the context of the current debate on climate change within the scientific community, since it allows the regional vulnerability and stability of the climate system to be assessed from a new perspective, relying on global and not only regional knowledge. The methodology developed in this thesis hence has the potential to contribute substantially to the understanding of the local effects of extreme events and tipping points in the Earth system within a holistic global framework.
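The construction pipeline described above, thresholding a statistical interrelationship matrix into an unweighted graph and computing betweenness from shortest paths, can be sketched on a toy example. The correlation values are invented for illustration, and the betweenness routine is a plain implementation of Brandes' algorithm:

```python
from collections import deque

# Vertex betweenness for an unweighted, undirected graph given as an
# adjacency list, via Brandes' algorithm (BFS shortest-path counting).
def betweenness(adj):
    n = len(adj)
    bc = [0.0] * n
    for s in range(n):
        sigma = [0] * n; sigma[s] = 1          # number of shortest paths from s
        dist = [-1] * n; dist[s] = 0
        preds = [[] for _ in range(n)]
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = [0.0] * n                      # back-propagate dependencies
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return [b / 2.0 for b in bc]               # undirected: each pair counted twice

# Five "grid points"; an edge wherever |correlation| exceeds a threshold.
corr = [[1.0, 0.9, 0.2, 0.1, 0.1],
        [0.9, 1.0, 0.8, 0.1, 0.1],
        [0.2, 0.8, 1.0, 0.7, 0.1],
        [0.1, 0.1, 0.7, 1.0, 0.9],
        [0.1, 0.1, 0.1, 0.9, 1.0]]
adj = [[j for j in range(5) if j != i and abs(corr[i][j]) > 0.5]
       for i in range(5)]
bc = betweenness(adj)
```

In this toy chain the central vertex carries the most shortest paths, mirroring how backbone channels concentrate information flow in the full climate network.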
Recently, several faint ringlets in the Saturnian ring system were found to maintain a peculiar orientation relative to the Sun. The Encke gap ringlets as well as the ringlet in the outer rift of the Cassini division were found to have distinct spatial displacements of several tens of kilometers away from Saturn towards the Sun, referred to as heliotropicity (Hedman et al., 2007). This is quite exceptional, since dynamically one would expect eccentric features in the Saturnian rings to precess around Saturn over periods of months. In our study we address this exceptional behavior by investigating the dynamics of circumplanetary dust particles with sizes in the range of 1-100 µm. These small particles are perturbed by non-gravitational forces, in particular solar radiation pressure, the Lorentz force, and planetary oblateness, on time scales of the order of days. The combined influence of these forces causes periodic evolution of the grains' orbital eccentricities as well as precession of their pericenters, as can be shown by secular perturbation theory. We show that this interaction results in a stationary eccentric ringlet, oriented with its apocenter towards the Sun, which is consistent with observational findings. By applying these heliotropic dynamics to the central Encke gap ringlet, we can give a limit for the expected smallest grain size in the ringlet of about 8.7 microns, and constrain the minimal lifetime to be of the order of months. Furthermore, our model matches the observed ringlet eccentricity in the Encke gap fairly well, which supports recent estimates of the size distribution of the ringlet material (Hedman et al., 2007). The ringlet width that results from our modeling based on heliotropic dynamics, however, overestimates the observed confined ringlet width by a factor of 3 to 10, depending on the width measure used.
This is indicative of mechanisms not included in the heliotropic model which potentially confine the ringlet to its observed width, including shepherding and scattering by moonlets embedded in the ringlet region. Based on these results, early investigations (Cuzzi et al., 1984; Spahn and Wiebicke, 1989; Spahn and Sponholz, 1989), and recent work published on the F ring (Murray et al., 2008), with which the Encke gap ringlets share similar morphological structures, we model the maintenance of the central ringlet by embedded moonlets. These moonlets, believed to be hundreds of meters across, release material into space as they are eroded by micrometeoroid bombardment (Divine, 1993). We further argue that Pan - one of Saturn's moons, which shares its orbit with the central ringlet of the Encke gap - is a rather weak source of ringlet material but efficiently confines the ringlet sources (moonlets) to move on horseshoe-like orbits. Moreover, we suppose that most of the narrow heliotropic ringlets are fed by a moonlet population, which is held by its largest member on horseshoe-like orbits. Modeling the equilibrium between particle sources and sinks with a primitive balance equation based on photometric observations (Porco et al., 2005), we find a minimal effective source mass of the order of 3 · 10⁻² M_Pan, which is needed to keep the central ringlet from disappearing.
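The size dependence of the radiation-pressure perturbation is commonly summarized by the ratio β of radiation pressure to solar gravity, for spherical grains often written as β ≈ 5.7e-5 · Q_pr / (ρ s) with ρ in g/cm³ and s in cm (Burns, Lamy & Soter, 1979). The density and efficiency factor below are illustrative assumptions, not the thesis parameters:

```python
# Ratio of solar radiation pressure to solar gravity for a spherical grain:
# beta ~ 5.7e-5 * Q_pr / (rho * s), rho in g/cm^3, s (radius) in cm.
# rho and q_pr below are assumed example values, not thesis parameters.
def beta(radius_micron, rho=1.0, q_pr=1.0):
    s_cm = radius_micron * 1e-4          # micron -> cm
    return 5.7e-5 * q_pr / (rho * s_cm)

betas = {s: beta(s) for s in (1, 10, 100)}   # beta falls off as 1/s
```

The 1/s scaling is what makes radiation pressure dynamically decisive for micron-sized grains while leaving the embedded moonlets untouched.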
Phase space reconstruction is a method that allows one to reconstruct the phase space of a system using only a one-dimensional time series as input. It can be used for calculating Lyapunov exponents and detecting chaos. It helps to understand complex dynamics and their behavior, and it can reproduce data sets which were not measured. There are many different methods which produce correct reconstructions, such as time delay, Hilbert transformation, differentiation and integration. The most widely used is the time-delay method, but all methods have special properties which are useful in different situations. Hence, every reconstruction method has situations in which it is the best choice. Looking at all these different methods, the questions are: Why can all these different-looking methods be used for the same purpose? Is there any connection between all these functions? The answer is found in the frequency domain: after a Fourier transformation, all these methods take a similar form. Every presented reconstruction method can be described as a multiplication in the frequency domain with a frequency-dependent reconstruction function. This structure is also known as a filter. From this point of view, every reconstructed dimension can be seen as a filtered version of the measured time series. It contains the original data but with a new emphasis: some parts are amplified and other parts are reduced. Furthermore, I show that not every function can be used for reconstruction. In the thesis, three characteristics are identified which are mandatory for the reconstruction function. Taking these restrictions into account, one obtains a whole family of new reconstruction functions. It thus becomes possible to reduce noise within the reconstruction process itself, or to use some advantages of already known reconstruction methods while suppressing their unwanted characteristics.
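The time-delay method named above is simple to state concretely: each reconstructed state vector stacks time-shifted copies of the scalar series, X_i = (x_i, x_{i+tau}, ..., x_{i+(m-1)tau}). A minimal sketch, with assumed embedding dimension and delay:

```python
import math

# Time-delay embedding of a scalar series into m-dimensional state vectors.
# dim and tau below are illustrative choices; in practice they are selected
# e.g. via autocorrelation or false-nearest-neighbour criteria.
def delay_embed(x, dim=3, tau=10):
    """Return vectors [x[i], x[i+tau], ..., x[i+(dim-1)*tau]]."""
    n = len(x) - (dim - 1) * tau
    return [[x[i + k * tau] for k in range(dim)] for i in range(n)]

series = [math.sin(0.1 * i) for i in range(200)]   # toy one-dimensional signal
vectors = delay_embed(series, dim=3, tau=15)
```

Seen through the filter interpretation, each delayed coordinate is the original series multiplied in the frequency domain by a pure phase factor exp(-i * omega * k * tau).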