Year of publication: 2011 (187 entries)
Document Type: Article (150), Doctoral Thesis (26), Review (3), Monograph/Edited Volume (2), Other (2), Postprint (2), Habilitation Thesis (1), Master's Thesis (1)
Institute: Institut für Physik und Astronomie (187)
To asymptotically complete scattering systems {M_+ + V, M_+} on H_+ := L^2(R_+, K, dλ), where M_+ is the multiplication operator on H_+ and V is a trace-class operator satisfying analyticity conditions, a decay semigroup is associated such that the spectrum of the generator of this semigroup coincides with the set of all resonances, i.e. the poles of the analytic continuation of the scattering matrix into the lower half-plane across the positive half-line. The decay semigroup thus yields a "time-dependent" characterization of the resonances. As a counterpart, a "spectral characterization" is mentioned, which is due to the "eigenvalue-like" properties of resonances.
The origin of Galactic cosmic rays is a century-long puzzle. Indirect evidence points to their acceleration by supernova shockwaves, but we know little of their escape from the shock and their evolution through the turbulent medium surrounding massive stars. Gamma rays can probe their spreading through the ambient gas and radiation fields. The Fermi Large Area Telescope (LAT) has observed the star-forming region of Cygnus X. The 1- to 100-gigaelectronvolt images reveal a 50-parsec-wide cocoon of freshly accelerated cosmic rays that flood the cavities carved by the stellar winds and ionization fronts from young stellar clusters. It provides an example to study the youth of cosmic rays in a superbubble environment before they merge into the older Galactic population.
Context. Extrapolations of solar photospheric vector magnetograms into three-dimensional magnetic fields in the chromosphere and corona are usually done under the assumption that the fields are force-free. This condition is violated in the photosphere itself and a thin layer in the lower atmosphere above. The field calculations can be improved by preprocessing the photospheric magnetograms. The intention here is to remove a non-force-free component from the data.
Aims. We compare two preprocessing methods presently in use, namely the methods of Wiegelmann et al. (2006, Sol. Phys., 233, 215) and Fuhrmann et al. (2007, A&A, 476, 349).
Methods. The two preprocessing methods were applied to a vector magnetogram of the recently observed active region NOAA AR 10 953. We examine the changes in the magnetogram effected by the two preprocessing algorithms. Furthermore, the original magnetogram and the two preprocessed magnetograms were each used as input data for nonlinear force-free field extrapolations by means of two different methods, and we analyze the resulting fields.
Results. Both preprocessing methods managed to significantly decrease the magnetic forces and magnetic torques that act through the magnetogram area and that can cause incompatibilities with the assumption of force-freeness in the solution domain. The force and torque decrease is stronger for the Fuhrmann et al. method. Both methods also reduced the amount of small-scale irregularities in the observed photospheric field, which can sharply worsen the quality of the solutions. For the chosen parameter set, the Wiegelmann et al. method led to greater changes in strong-field areas, leaving weak-field areas mostly unchanged, and thus providing an approximation of the magnetic field vector in the chromosphere, while the Fuhrmann et al. method weakly changed the whole magnetogram, thereby better preserving patterns present in the original magnetogram. Both preprocessing methods raised the magnetic energy content of the extrapolated fields to values above the minimum energy, corresponding to the potential field. Also, the fields calculated from the preprocessed magnetograms fulfill the solenoidal condition better than those calculated without preprocessing.
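The forces and torques acting through the magnetogram area are commonly quantified by flux-balance integrals over the photospheric boundary (diagnostics in the spirit of the Molodensky criteria). A minimal sketch of one such diagnostic, using hypothetical field arrays on a uniform grid (the normalization convention here is an assumption, not this paper's exact definition):

```python
import numpy as np

def force_balance_metrics(bx, by, bz):
    """Normalized net magnetic force components over a magnetogram.

    A magnetogram consistent with a force-free coronal field should give
    values close to zero; bx, by, bz are the photospheric vector-field
    components on a uniform grid (the pixel area cancels on normalization).
    """
    p = np.sum(bx**2 + by**2 + bz**2)            # normalization: total magnetic pressure
    fx = np.sum(bx * bz) / p                     # net force in x
    fy = np.sum(by * bz) / p                     # net force in y
    fz = np.sum(bz**2 - bx**2 - by**2) / (2 * p) # net force in z
    return float(fx), float(fy), float(fz)

# A uniform vertical field: fx = fy = 0, but fz = 0.5 (not flux balanced).
bz = np.ones((4, 4))
bx = np.zeros_like(bz)
by = np.zeros_like(bz)
print(force_balance_metrics(bx, by, bz))  # (0.0, 0.0, 0.5)
```

Preprocessing, in this picture, nudges the measured boundary data until such integrals are acceptably small before the extrapolation is run.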
Quantum theory (QT) is usually formulated in terms of abstract mathematical postulates involving Hilbert spaces, state vectors and unitary operators. In this paper, we show that the full formalism of QT can instead be derived from five simple physical requirements, based on elementary assumptions regarding preparations, transformations and measurements. This is very similar to the usual formulation of special relativity, where two simple physical requirements, the principles of relativity and light-speed invariance, are used to derive the mathematical structure of Minkowski space-time. Our derivation provides insights into the physical origin of the structure of quantum state spaces (including a group-theoretic explanation of the Bloch ball and its three-dimensionality) and suggests several natural possibilities for constructing consistent modifications of QT.
Cavitation at a solid surface normally begins with nucleation, in which defects or assembled molecules located at a liquid-solid interface act as nucleation centers and are actively involved in the evolution of cavitation bubbles. Here, we propose a simple approach to evaluate the behavior of cavitation bubbles formed under high-intensity ultrasound (20 kHz, 51.3 W cm^-2) at solid surfaces, based on sonication of patterned substrates with a small roughness (less than 3 nm) and controllable surface energy. A mixture of octadecylphosphonic acid (ODTA) and octadecanethiol (ODT) was stamped on a Si wafer coated with different thicknesses of an aluminium layer (20-500 nm). We investigated the growth mechanism of cavitation bubble nuclei and the evolution of individual pits (defects) formed under sonication on the modified surface. A new activation behavior as a function of Al thickness, sonication time, ultrasonic power, and temperature is reported. In this process cooperativity is introduced, as initially formed pits further reduce the energy needed to form bubbles. Furthermore, cavitation on the patterns is a controllable process: for sonication times of up to 40-50 min, only the hydrophobic areas are active nucleation sites. This study provides convincing support for our theoretical approach to nucleation.
It has recently been discovered that for certain rates of mode-exchange collisions analytic solutions can be found for a Hamiltonian describing the two-mode Bose-Einstein condensate. We proceed to study the behaviour of the system using perturbation theory if the coupling constants only approximately match these parameter constraints. We find that the model is robust to such perturbations. We study the effects of degeneracy on the perturbations and find that the induced changes differ greatly from the non-degenerate case. We also model inelastic collisions that result in particle loss or condensate decay as external perturbations and use this formalism to examine the effects of three-body recombination and background collisions.
The discovery of a plume of water vapour and ice particles emerging from warm fractures ('tiger stripes') in Saturn's small, icy moon Enceladus(1-6) raised the question of whether the plume emerges from a subsurface liquid source(6-8) or from the decomposition of ice(9-12). Previous compositional analyses of particles injected by the plume into Saturn's diffuse E ring have already indicated the presence of liquid water(8), but the mechanisms driving the plume emission are still debated(13). Here we report an analysis of the composition of freshly ejected particles close to the sources. Salt-rich ice particles are found to dominate the total mass flux of ejected solids (more than 99 per cent) but they are depleted in the population escaping into Saturn's E ring. Ice grains containing organic compounds are found to be more abundant in dense parts of the plume. Whereas previous Cassini observations were compatible with a variety of plume formation mechanisms, these data eliminate or severely constrain non-liquid models and strongly imply that a salt-water reservoir with a large evaporating surface(7,8) provides nearly all of the matter in the plume.
A soft X-ray approach to electron-phonon interactions beyond the Born-Oppenheimer approximation
(2011)
With modern soft X-ray methods, the whole field of electron-phonon interactions becomes accessible directly in the ultrafast time domain with ultrashort pulsed X-ray sources, as well as in the energy domain through modern highly resolving spectrometers. The well-known core-hole clock approach plays an intermediate role, resolving energetic and temporal features at the same time. In this perspective paper, we review several experiments to illustrate the modern advances in the selective study of electron-phonon interactions as fundamentally determining ingredients for materials properties. We present the different complementary approaches that can be taken with soft X-ray methods to conquer this field beyond the Born-Oppenheimer approximation.
We establish a link between unitary relaxation dynamics after a quench in closed many-body systems and the entanglement in the energy eigenbasis. We find that, even if reduced states equilibrate, they can retain memory of the initial conditions, even in certain models that are far from integrable. We show that in such situations the equilibrium states are still described by a maximum-entropy or generalized Gibbs ensemble, regardless of whether a model is integrable or not, thereby contributing to a recent debate. In addition, we discuss individual aspects of the thermalization process, comment on the role of Anderson localization, and collect and compare different notions of integrability.
We investigate the origin and physical properties of O VI absorbers at low redshift (z = 0.25) using a subset of cosmological, hydrodynamical simulations from the OverWhelmingly Large Simulations (OWLS) project. Intervening O VI absorbers are believed to trace shock-heated gas in the warm-hot intergalactic medium (WHIM) and may thus play a key role in the search for the missing baryons in the present-day Universe. When compared to observations, the predicted distributions of the different O VI line parameters (column density, Doppler parameter, rest equivalent width W_r) from our simulations exhibit a lack of strong O VI absorbers, a discrepancy that has also been found by Oppenheimer & Dave. This suggests that physical processes on subgrid scales (e.g. turbulence) may strongly influence the observed properties of O VI systems. We find that the intervening O VI absorption arises mainly in highly metal-enriched (10^-1 < Z/Z_sun ≲ 1) gas at typical overdensities of 1 < ρ/⟨ρ⟩ ≲ 10^2. One-third of the O VI absorbers in our simulation are found to trace gas at temperatures T < 10^5 K, while the rest arises in gas at higher temperatures, most of it around T = 10^(5.3±0.5) K. These temperatures are much higher than those inferred by Oppenheimer & Dave, probably because that work did not take into account the suppression of metal-line cooling by the photoionizing background radiation. While the O VI resides in a similar region of (ρ, T)-space as much of the shock-heated baryonic matter, the vast majority of this gas has a lower metal content and does not give rise to detectable O VI absorption. As a consequence of the patchy metal distribution, O VI absorbers in our simulations trace only a very small fraction of the cosmic baryons (< 2 per cent) and of the cosmic metals. Instead, these systems presumably trace previously shock-heated, metal-rich material from galactic winds that is now mixing with the ambient gas and cooling.
The common approach of comparing O VI and H I column densities to estimate the physical conditions in intervening absorbers from QSO observations may be misleading, as most of the H I (and most of the gas mass) is not physically connected with the high-metallicity patches that give rise to the O VI absorption.
This Thesis focuses on the physics of neutron stars and its description with methods of numerical relativity. In the first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, like the entropy in a critical process, should provide deeper insight into thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level.
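Schematically, the two ways of writing the equations can be sketched in a simplified 1D radial form (suppressing the full general-relativistic terms, so this is an illustration rather than the code's exact system):

```latex
% Standard formulation: expand the flux operator and move the
% non-derivative 1/r term to the source,
\partial_t \mathbf{U} + \partial_r \mathbf{F}(\mathbf{U})
  = \mathbf{S}(\mathbf{U}) - \frac{1}{r}\,\mathbf{F}(\mathbf{U}) .

% New formulation: redefine the conserved variables via a factor r,
% keeping the operator in strict conservation form,
\partial_t \bigl( r\,\mathbf{U} \bigr)
  + \partial_r \bigl( r\,\mathbf{F}(\mathbf{U}) \bigr)
  = r\,\mathbf{S}(\mathbf{U}) .
```

Both lines are algebraically equivalent (multiply the first by r and use the product rule), but the second keeps the geometric term inside the divergence, which is what changes the numerical truncation error.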
Our tests show that the new formulation yields results with a global truncation error that is one or more orders of magnitude smaller than those of alternative and commonly used formulations. The second part of the Thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena concerning the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, their velocity, and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considered the head-on collision of Gaussian distributions of matter.
Also in this case they found the appearance of type-I critical behaviour, and they additionally performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium and, in particular, not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this Thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup that is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with what is found in the literature, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations similar to those studied in the context of scalar-field critical collapse.
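For type-I critical behaviour, the scaling relation in question takes the standard form used throughout the critical-collapse literature:

```latex
\tau(P) \simeq -\sigma \,\ln \lvert P - P_\star \rvert + \text{const},
```

where τ is the lifetime of the near-critical solution and σ is the reciprocal of the growth rate of the single unstable mode of the critical solution; the fine structure of the scaling relation appears as a periodic modulation superimposed on this linear-in-log behaviour.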
Analysis of the operating characteristics of a dielectric elastomer actuator (DEA) submount for the high-precision positioning of optical components in one dimension is presented. Precise alignment of a single-mode fiber is demonstrated and variation of the sensitivity of the submount motion by changing the bias voltage is confirmed. A comparison of the performance of the DEA submount with a piezoelectric alignment stage is made, which demonstrates that DEAs could present a very attractive, low-cost alternative to currently used manual technologies in overcoming the hurdle of expensive packaging of single-mode optical components.
The Great Nebula in Carina provides an exceptional view into the violent massive star formation and feedback that typifies giant H II regions and starburst galaxies. We have mapped the Carina star-forming complex in X-rays, using archival Chandra data and a mosaic of 20 new 60 ks pointings using the Chandra X-ray Observatory's Advanced CCD Imaging Spectrometer, as a testbed for understanding recent and ongoing star formation and to probe Carina's regions of bright diffuse X-ray emission. This study has yielded a catalog of properties of > 14,000 X-ray point sources;> 9800 of them have multiwavelength counterparts. Using Chandra's unsurpassed X-ray spatial resolution, we have separated these point sources from the extensive, spatially-complex diffuse emission that pervades the region; X-ray properties of this diffuse emission suggest that it traces feedback from Carina's massive stars. In this introductory paper, we motivate the survey design, describe the Chandra observations, and present some simple results, providing a foundation for the 15 papers that follow in this special issue and that present detailed catalogs, methods, and science results.
The dynamical structure of genetic networks determines the occurrence of various biological mechanisms, such as cellular differentiation. However, how cellular diversity evolves in relation to inherent stochasticity and intercellular communication remains to be understood. Here, we define a concept of stochastic bifurcations suitable for investigating the dynamical structure of genetic networks, and show that under stochastic influence the expression of given proteins of interest is defined via the probability distribution of the phase variable, representing one of the genes constituting the system. Moreover, we show that under changing stochastic conditions the probabilities of expressing certain concentration values differ, leading to different functionality of the cells, and thus to differentiation of the cells into the various types.
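The idea of a stochastic bifurcation, a qualitative change in the stationary probability distribution of a phase variable, can be illustrated with a minimal toy model (a generic double-well system, not the genetic network studied here): under noise, trajectories started from one state populate two "expression states", and the distribution of the phase variable becomes bimodal.

```python
import numpy as np

def simulate_double_well(n_traj=2000, t_end=50.0, dt=0.01, sigma=0.5, seed=0):
    """Euler-Maruyama integration of dx = (x - x^3) dt + sigma dW.

    Returns the final positions of n_traj trajectories started at x = 0;
    a bimodal histogram of these samples is the signature of a stochastic
    bifurcation into two coexisting states near x = -1 and x = +1.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_traj)
    sqrt_dt = np.sqrt(dt)
    for _ in range(int(t_end / dt)):
        x += (x - x**3) * dt + sigma * sqrt_dt * rng.standard_normal(n_traj)
    return x

samples = simulate_double_well()
left = float(np.mean(samples < -0.5))   # weight of the "low-expression" state
right = float(np.mean(samples > 0.5))   # weight of the "high-expression" state
print(left, right)                       # roughly equal by symmetry
```

Changing the noise strength sigma (or making the potential asymmetric) shifts the weights of the two peaks, which is the mechanism by which "the probabilities of expressing certain concentration values differ" in the abstract above.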
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of space-like infinity, yet quite general in the interior. The proof relies on some abstract, non-constructive arguments, which make it difficult to calculate such data numerically by using similar arguments. A quasilinear elliptic system of equations is presented of which we expect that it can be used to construct vacuum initial data which are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions. It is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when trying to improve this result to show the existence of solutions that are asymptotically static at higher order. The problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions in asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed. This is done in a way that opens the possibility to perform numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to that part of the quasilinear system which gives rise to the difficulties. For this decomposition analogous results are obtained. It turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications.
The question of whether the results obtained so far can again be used in a perturbation argument to show the existence of vacuum initial data which approach static solutions at infinity to any given order thus remains open. The answer requires further analysis and perhaps new methods.
We use the Kelvin probe method to study the energy-level alignment of four conjugated polymers deposited on various electrodes. Band bending is observed in all polymers when the substrate work function exceeds critical values. Through modeling, we show that the band bending is explained by charge transfer from the electrodes into a small density of states that extends several hundred meV into the band gap. The energetic spread of these states is correlated with charge-carrier mobilities, suggesting that the same states also govern charge transport in the bulk of these polymers.
Cellular polypropylene (PP) ferroelectrets combine a large piezoelectricity with mechanical flexibility and elastic compliance. Their charging process represents a series of dielectric barrier discharges (DBDs) that generate a cold plasma with numerous active species and thus modify the inner polymer surfaces of the foam cells. Both the threshold for the onset of DBDs and the piezoelectricity of ferroelectrets are sensitive to repeated DBDs in the voids. It is found that the threshold voltage is approximately halved and the charging efficiency is clearly improved after only 10^3 DBD cycles. However, plasma modification of the inner surfaces from repeated DBDs deteriorates the chargeability of the voids, leading to a significant reduction of the piezoelectricity in ferroelectrets. After a significant waiting period, the chargeability of previously fatigued voids shows a partial recovery. The plasma modification is, however, detrimental to the stability of the deposited charges and thus also of the macroscopic dipoles and of the piezoelectricity. Fatigue from only 10^3 DBD cycles already results in significantly less stable piezoelectricity in cellular PP ferroelectrets. The fatigue rate as a function of the number of voltage cycles follows a stretched exponential. Fatigue from repeated DBDs can be avoided if most of the gas molecules inside the voids are removed via a suitable evacuation process.
We show how the spontaneous emission rate of an excited two-level atom placed in a trapped Bose-Einstein condensate of ground-state atoms is enhanced by bosonic stimulation. This stimulation depends on the overlap of the excited matter-wave packet with the macroscopically occupied condensate wave function, and provides a probe of the spatial coherence of the Bose gas. The effect can be used to amplify the distance-dependent decay rate of an excited atom near an interface.
Breakdown threshold of dielectric barrier discharges in ferroelectrets where Paschen's law fails
(2011)
The piezoelectric activity of charged cellular foams (so-called ferroelectrets) is compared against simulations based on a multi-layer electromechanical model and Townsend's model of Paschen breakdown, with the distribution of void heights determined from scanning electron micrographs. While the calculated space charge hysteresis curves are in good agreement with experimental data, the onset of piezoelectric activity is observed at significantly higher electric fields than predicted by Paschen's law. One likely explanation is that the commonly accepted Paschen curve for electric breakdown in air poorly describes the critical electric field for dielectric barrier discharges in micrometer-size cavities.
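The classical Paschen relation that the measurements are tested against follows from Townsend's criterion. A sketch, using commonly quoted coefficients for air (the values of A, B, and the secondary-emission coefficient γ below are textbook assumptions, not values from this work):

```python
import math

def paschen_voltage(pd, A=15.0, B=365.0, gamma=0.01):
    """Paschen breakdown voltage V_b for a pressure-distance product pd
    (Torr*cm), from Townsend's criterion:

        V_b = B*pd / ( ln(A*pd) - ln(ln(1 + 1/gamma)) )

    A, B are ionization-coefficient fit constants for air (Torr*cm units);
    the formula is only defined to the right of the singularity in the
    denominator, i.e. for sufficiently large pd.
    """
    return B * pd / (math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma)))

# The curve has a minimum of a few hundred volts near pd ~ 1 Torr*cm; for
# micrometer-size cavities at ambient pressure, pd lies far to the left of
# this minimum, exactly where the classical curve is least reliable.
print(paschen_voltage(1.0))   # a few hundred volts
```

That the commonly used curve is least trustworthy at small pd is one way to motivate the paper's observation that breakdown in micrometer-size voids deviates from the Paschen prediction.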
A new strategy for the synthesis of high-permittivity polymer composites is demonstrated, based on a well-defined spatial distribution of ultralow amounts of conductive nanoparticles. The spatial distribution was realized by immobilizing Cu nanoparticles within the pore system of silica microspheres, preventing direct contact between individual Cu particles. Both Cu-loaded and unloaded silica microspheres were then used as fillers in polymer composites prepared with thermoplastic SEBS rubber as the matrix. With a metallic Cu content of about 0.26 vol% in the composite, a relative increase of 94% in real permittivity was obtained. No Cu-induced relaxations were observed in the dielectric spectrum within the studied frequency range of 0.1 Hz to 1 MHz. When related to the amount of conductive nanoparticles, the obtained composites achieve the highest broad-spectrum enhancement of permittivity ever reported for a polymer-based composite.
We determined experimentally the complex transient optical dielectric function of a well-characterized polyelectrolyte/gold-nanoparticle composite system over a broad spectral range upon short-pulse laser excitation by simultaneously measuring the time-dependent reflectance and transmittance of white-light pulses with femtosecond pump-probe spectroscopy. We extracted directly the ultrafast changes in the real and imaginary parts of the effective dielectric function, ε_eff^r(ω, t) and ε_eff^i(ω, t), from the experiment. This complete experimental set of information on the time-dependent complex dielectric function challenges theories modeling the transient dielectric function of gold particles and the effective medium.
The maximum cosmic-ray energy achievable by acceleration by a relativistic blast wave is derived. It is shown that forward shocks from long gamma-ray bursts (GRBs) in the interstellar medium accelerate protons to large enough energies, and have a sufficient energy budget, to produce the Galactic cosmic-ray component just below the ankle at 4 × 10^18 eV, as per an earlier suggestion. It is further argued that, were extragalactic long GRBs responsible for the component above the ankle as well, the occasional Galactic GRB within the solar circle would contribute more than the observational limits on the outward flux from the solar circle, unless an avoidance scenario, such as intermittency and/or beaming, allows the present-day local flux to be less than 10^-3 of the average. Difficulties with these avoidance scenarios are noted.
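The underlying confinement limit is the Hillas criterion, E_max ≈ Z e β B R. A back-of-envelope sketch (the field strength and region size below are illustrative assumptions; the paper's derivation for a relativistic blast wave gains additional Lorentz-factor dependence not included here):

```python
def hillas_emax_eV(Z, B_gauss, R_cm, beta=1.0):
    """Maximum energy from the Hillas confinement criterion.

    E_max = Z e beta B R; in Gaussian-CGS units this evaluates to
    E_max [eV] = 300 * Z * beta * B[G] * R[cm].
    """
    return 300.0 * Z * beta * B_gauss * R_cm

PC_CM = 3.086e18  # one parsec in centimetres

# Illustrative: protons, a milligauss amplified field, 0.01 pc region
print(f"{hillas_emax_eV(1, 1e-3, 0.01 * PC_CM):.2e} eV")
```

Scaling these inputs up (larger B, larger R, or a relativistic Γ boost) is what lets blast waves reach the 10^18 eV energies discussed in the abstract.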
The Chandra Carina Complex contains 200 known O- and B-type stars. The Chandra survey detected 68 of the 70 O stars and 61 of 127 known B0-B3 stars. We have assembled a publicly available optical/X-ray database to identify OB stars that depart from the canonical L_X/L_bol relation or whose average X-ray temperatures exceed 1 keV. Among the single O stars with high kT we identify two candidate magnetically confined wind shock sources: Tr16-22, O8.5 V, and LS 1865, O8.5 V((f)). The O4 III(fc) star HD 93250 exhibits strong, hard, variable X-rays, suggesting that it may be a massive binary with a period of > 30 days. The visual O2 If* binary HD 93129A shows soft 0.6 keV and hard 1.9 keV emission components, suggesting embedded wind shocks close to the O2 If* Aa primary and colliding wind shocks between Aa and Ab. Of the 11 known O-type spectroscopic binaries, the long orbital-period systems HD 93343, HD 93403, and QZ Car have higher shock temperatures than short-period systems such as HD 93205 and FO 15. Although the X-rays from most B stars may be produced in the coronae of unseen, low-mass pre-main-sequence companions, a dozen B stars with high L_X cannot be explained by a distribution of unseen companions. One of these, SS73 24 in the Treasure Chest cluster, is a new candidate Herbig Be star.
The Casimir-Polder interaction between a single neutral atom and a nearby surface, arising from the (quantum and thermal) fluctuations of the electromagnetic field, is a cornerstone of cavity quantum electrodynamics (cQED), and theoretically well established. Recently, Bose-Einstein condensates (BECs) of ultracold atoms have been used to test the predictions of cQED. The purpose of the present thesis is to upgrade single-atom cQED with the many-body theory needed to describe trapped atomic BECs. Tools and methods are developed in a second-quantized picture that treats atom and photon fields on the same footing. We formulate a diagrammatic expansion using correlation functions for both the electromagnetic field and the atomic system. The formalism is applied to investigate, for BECs trapped near surfaces, dispersion interactions of the van der Waals-Casimir-Polder type, and the Bosonic stimulation in spontaneous decay of excited atomic states. We also discuss a phononic Casimir effect, which arises from the quantum fluctuations in an interacting BEC.
In many architectures for fault-tolerant quantum computing universality is achieved by a combination of Clifford group unitary operators and preparation of suitable nonstabilizer states, the so-called magic states. Universality is possible even for some fairly noisy nonstabilizer states, as distillation can convert many noisy copies into fewer purer magic states. Here we propose protocols that exploit multiple species of magic states in surprising ways. These protocols provide examples of previously unobserved phenomena that are analogous to catalysis and activation well known in entanglement theory.
Characterization and calibration of piezoelectric polymers: in situ measurements of body vibrations
(2011)
Piezoelectric polymers are known for their flexibility in applications, mainly due to their bending ability, robustness, and variable sensor geometry. They are optimal materials for minimally invasive investigations of vibrational systems, e.g., for wood, where the acoustical impedance matches particularly well. Many applications may be imagined, e.g., monitoring of buildings, vehicles, machinery, and alarm systems, so our investigations may have a large impact on technology. Longitudinal piezoelectricity converts mechanical vibrations normal to the polymer-film plane into an electrical signal, and the respective piezoelectric coefficient needs to be carefully determined as a function of the relevant material parameters. In order to evaluate efficiency and durability of piezopolymers, we use polyvinylidene fluoride and measure the piezoelectric coefficient with respect to static pressure, amplitude of the dynamically applied force, and long-term stability. A known problem is the slow relaxation of the material towards equilibrium when the external pressure changes; here, we demonstrate how to counter this problem with careful calibration. Since our focus is on acoustical measurements, we accurately determine the frequency response curve, probably the most important characteristic for acoustics. Eventually, we show that our piezopolymer transducers can be used as calibrated acoustical sensors for body-vibration measurements on a wooden musical instrument, where it is important to perform minimally invasive measurements. A comparison with the simultaneously recorded airborne sound yields important insight into the mechanism of sound radiation in comparison with the sound propagating in the material. This is especially important for transient signals, where not only the long-lived eigenmodes contribute to the sound radiation.
Our analyses support the conclusion that piezopolymer sensors can be employed as a general tool for the determination of the internal dynamics of vibrating systems.
The present thesis was born and evolved within the RAdial Velocity Experiment (RAVE), with the goal of measuring chemical abundances from RAVE spectra and exploiting them to investigate the chemical gradients along the plane of the Galaxy, providing constraints on possible Galactic formation scenarios. RAVE is a large spectroscopic survey which aims to observe ~10^6 stars spectroscopically by the end of 2012 and to measure their radial velocities, atmospheric parameters, and chemical abundances. The project makes use of the UK Schmidt telescope at the Australian Astronomical Observatory (AAO) in Siding Spring, Australia, equipped with the multiobject spectrograph 6dF. To date, RAVE has collected and measured more than 450,000 spectra. The precision of the chemical abundance estimates depends on the reliability of the adopted atomic and atmospheric parameters (in particular the oscillator strengths of the absorption lines and the effective temperature, gravity, and metallicity of the stars measured). Therefore, we first identified 604 absorption lines in the RAVE wavelength range and refined their oscillator strengths with an inverse spectral analysis. Then, we improved the RAVE stellar parameters by modifying the RAVE pipeline and the spectral library the pipeline relies on. The modifications removed some systematic errors in the stellar parameters discovered during this work. To obtain chemical abundances, we developed two different processing pipelines. Both of them measure chemical abundances by assuming stellar atmospheres in Local Thermodynamic Equilibrium (LTE). The first one determines element abundances from equivalent widths of absorption lines. Since this pipeline showed poor sensitivity to abundances relative to iron, it has been superseded. The second one exploits chi^2 minimization between observed and model spectra. Thanks to its precision, it has been adopted for the creation of the RAVE chemical catalogue.
This pipeline provides abundances with uncertainties of about 0.2 dex for spectra with signal-to-noise ratio S/N>40 and about 0.3 dex for spectra with 20<S/N<40. For this work, the pipeline measured chemical abundances of up to 7 elements for 217,358 RAVE stars. With these data we investigated the chemical gradients along the Galactic radius of the Milky Way. We found that stars with low vertical velocities |W| (which stay close to the Galactic plane) show an iron abundance gradient in agreement with previous works (~-0.07 dex kpc^-1), whereas stars with larger |W|, which are able to reach larger heights above the Galactic plane, show progressively flatter gradients. The gradients of the other elements follow the same trend. This suggests that efficient radial mixing acts in the Galaxy or that the thick disk formed from homogeneous interstellar matter. In particular, we found hundreds of stars which can be kinematically classified as thick disk stars but exhibit a chemical composition typical of the thin disk. A few stars of this kind have already been detected by other authors, and their origin is still not clear. One possibility is that they are thin disk stars that were kinematically heated and then underwent an efficient radial mixing process which blurred (and so flattened) the gradient. Alternatively, they may be a 'transition population' which represents an evolutionary bridge between the thin and thick disks. Our analysis shows that the two explanations are not mutually exclusive. Future follow-up high-resolution spectroscopic observations will clarify their role in the evolution of the Galactic disk.
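The chi^2 minimization between observed and model spectra that underlies the second pipeline can be illustrated with a toy sketch; the spectra, wavelength grid, and all parameter values below are invented for illustration, and this is not the RAVE pipeline itself:

```python
import numpy as np

def chi2(observed, model, noise):
    """Chi^2 misfit between an observed and a model spectrum."""
    return np.sum(((observed - model) / noise) ** 2)

def best_abundance(observed, noise, model_grid, abundances):
    """Return the abundance whose model spectrum minimizes chi^2."""
    values = [chi2(observed, m, noise) for m in model_grid]
    i = int(np.argmin(values))
    return abundances[i], values[i]

# Toy spectrum: one Gaussian absorption line whose depth scales with a
# hypothetical abundance parameter; wavelengths mimic the RAVE range.
wavelength = np.linspace(8400.0, 8800.0, 500)    # Angstrom

def model_spectrum(abundance):
    line = abundance * np.exp(-0.5 * ((wavelength - 8600.0) / 2.0) ** 2)
    return 1.0 - line                             # normalized continuum = 1

rng = np.random.default_rng(0)
true_abundance = 0.4
noise_level = 0.02                                # roughly S/N ~ 50
observed = model_spectrum(true_abundance) \
    + rng.normal(0.0, noise_level, wavelength.size)

grid = np.linspace(0.0, 1.0, 101)
models = [model_spectrum(a) for a in grid]
est, _ = best_abundance(observed, noise_level, models, grid)
```

A real pipeline minimizes over several atmospheric parameters and element abundances simultaneously; the one-parameter grid search above only conveys the principle.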
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, these molecules act on a length scale for which thermal collisions have a considerable impact on their motion. In this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description of the energy conversion by the molecular motor myosin V, drawing on many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single-molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: for loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured through the network description supplemented by a Markov process to describe the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule and of treating the mechanical and chemical processes on an equal footing. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles.
In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions. These reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description for the myosin V motor as studied in single motor experiments.
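The combination of a Markov description with force-dependent stepping rates can be illustrated by a deliberately simplified toy motor. The two-rate cycle, the load-sharing factor, and all rate constants below are hypothetical modeling assumptions, not the full myosin V network of the thesis:

```python
import numpy as np

kT = 4.1          # thermal energy at room temperature, pN nm
STEP = 36.0       # myosin V step size, nm
K_ON = 0.9        # 1/(uM s), ATP-dependent forward rate (hypothetical)
K_BACK0 = 0.01    # 1/s, zero-load backstepping rate (hypothetical)

def rates(atp_uM, force_pN, theta=0.5):
    """Forward/backward stepping rates under a resisting load force_pN,
    with a load-sharing factor theta (a common modeling assumption)."""
    kf = K_ON * atp_uM * np.exp(-theta * force_pN * STEP / kT)
    kb = K_BACK0 * np.exp((1.0 - theta) * force_pN * STEP / kT)
    return kf, kb

def mean_velocity(atp_uM, force_pN):
    """Analytic mean velocity of the Poisson stepper (nm/s)."""
    kf, kb = rates(atp_uM, force_pN)
    return STEP * (kf - kb)

def simulate(atp_uM, force_pN, t_max=100.0, seed=0):
    """Gillespie simulation of the same two-rate stepping process."""
    rng = np.random.default_rng(seed)
    kf, kb = rates(atp_uM, force_pN)
    t, x = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / (kf + kb))   # waiting time to next event
        if t > t_max:
            return x / t_max                    # average velocity, nm/s
        x += STEP if rng.random() < kf / (kf + kb) else -STEP
```

At zero load the simulated velocity converges to `mean_velocity`; the actual chemomechanical network of the thesis contains many more states and cycles, and in particular reproduces the ATP-independent backward-load branch, which this sketch does not.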
We present climatic consequences of the Representative Concentration Pathways (RCPs) using the coupled climate model CLIMBER-3 alpha, which contains a statistical-dynamical atmosphere and a three-dimensional ocean model. We compare those with emulations of 19 state-of-the-art atmosphere-ocean general circulation models (AOGCMs) using MAGICC6. The RCPs are designed as standard scenarios for the forthcoming IPCC Fifth Assessment Report to span the full range of future greenhouse gas (GHG) concentration pathways currently discussed. The lowest of the RCP scenarios, RCP3-PD, is projected in CLIMBER-3 alpha to imply a maximal warming by the middle of the 21st century slightly above 1.5 degrees C and a slow decline of temperatures thereafter, approaching today's level by 2500. We identify two mechanisms that slow down global cooling after GHG concentrations peak: the known inertia induced by mixing-related oceanic heat uptake, and a change in oceanic convection that enhances ocean heat loss in high latitudes, reducing the surface cooling rate by almost 50%. Steric sea level rise under the RCP3-PD scenario continues for 200 years after the peak in surface air temperatures, stabilizing around 2250 at 30 cm. This contrasts with around 1.3 m of steric sea level rise by 2250, and 2 m by 2500, under the highest scenario, RCP8.5. Maximum oceanic warming at intermediate depth (300-800 m) is found to exceed that of the sea surface by the second half of the 21st century under RCP3-PD. This intermediate-depth warming persists for centuries even after surface temperatures have returned to present-day values, with potential consequences for marine ecosystems, oceanic methane hydrates, and ice-shelf stability. Due to an enhanced land-ocean temperature contrast, all scenarios yield an intensification of monsoon rainfall under global warming.
The thermal behavior of poly(methoxydiethylenglycol acrylate) (PMDEGA) is studied in thin hydrogel films on solid supports and is compared with the behavior in aqueous solution. The PMDEGA hydrogel film thickness is varied from 2 to 422 nm. Initially, these films are homogeneous, as measured with optical microscopy, atomic force microscopy, X-ray reflectivity, and grazing-incidence small-angle X-ray scattering (GISAXS). However, they tend to dewet when stored under ambient conditions. Along the surface normal, no long-range correlations between substrate and film surface are detected with GISAXS, due to the high mobility of the polymer at room temperature. The swelling of the hydrogel films as a function of the water vapor pressure and the temperature is probed for saturated water vapor pressures between 2,380 and 3,170 Pa. While the swelling capability is found to increase with water vapor pressure, swelling as a function of temperature reveals a collapse phase transition of the lower critical solution temperature type. The transition temperature decreases from 40.6 °C to 36.6 °C with increasing film thickness, but is independent of the thickness for very thin films below 40 nm. The observed transition temperature range compares well with the cloud points observed in dilute (0.1 wt.%) and semi-dilute (5 wt.%) solution, which decrease from 45 °C to 39 °C with increasing concentration.
A prerequisite for the rational design of functional organic materials with tailor-made electronic properties is knowledge of the structure-property relationship for the specific class of molecules under consideration. This encouraged us to systematically study the influence of the molecular structure and substitution pattern of aromatically substituted 1,3,4-oxadiazoles on the electronic properties and packing motifs of these molecules and on the interplay of these factors. For this purpose, seven diphenyl-oxadiazoles equipped with methyl substituents in the ortho- and meta-position(s) were synthesized and characterized. Absorption and fluorescence spectra in solution served here as tools to monitor substitution-induced changes in the electronic properties of the individual molecules, whereas X-ray and optical measurements in the solid state provided information on the interplay of electronic and packing effects. In solution, the spectral position of the absorption maximum, the size of the Stokes shift, and the fluorescence quantum yield are considerably affected by substitution in three or four ortho-positions. This results in blue-shifted absorption bands, increased Stokes shifts, and reduced fluorescence quantum yields, whereas the spectral position and vibrational structure of the emission bands remain more or less unaffected. In the crystalline state, however, the spectral position and shape of the emission bands display a strong dependence on the molecular structure and/or packing motifs that seem to control the amount of dye-dye interactions. These observations reveal the limited value of commonly reported absorption and fluorescence measurements in solution for a straightforward comparison of spectroscopic results with single-crystal X-ray crystallography. This underlines the importance of solid-state spectroscopic studies for a better understanding of the interplay of electronic effects and molecular order.
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols, as parts of the Arctic atmosphere, play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute continuous, altitude-resolved aerosol profiles to the understanding of the occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. Attention was focused on the analysis of periods with a high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of both 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months following the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450 m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction, and depolarization) as well as water vapor profiles within the lower troposphere. The combination of KARL data with data from other instruments on site, namely radiosondes, a sun photometer, a Micro Pulse LIDAR, and a tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere.
The two spring periods, March and April of 2007 and 2009, were first analyzed based on meteorological parameters, such as local temperature and relative humidity profiles as well as large-scale pressure patterns and air mass origin regions. Here, it was not possible to find a clear correlation between enhanced AOD and air mass origin. However, in a comparison of two cloud-free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air mass transport through the central Arctic. This suggests the occurrence of aerosol transformation processes during the aerosol transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other case studies suggest a differentiation between three aerosol event types and their origins: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary layer aerosols, and enhanced aerosol load across wide portions of the troposphere. For the spring period of 2007, the available KARL data were statistically analyzed using a characterization scheme based on the optical characteristics of the scattering particles. The scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 provided the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar in both years, with maximum values above 0.1 about three to five weeks after the respective eruption. In both years, the stratospheric AOD persisted at higher values than usual until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct and thin layers, which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude.
Three case studies were performed; all revealed rather large indices of refraction of m = (1.53–1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long-term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data of other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
We analyze the equilibrium properties of a weakly interacting, trapped quasi-one-dimensional Bose gas at finite temperatures and compare different theoretical approaches. We focus in particular on two stochastic theories: a number-conserving Bogoliubov (NCB) approach and a stochastic Gross-Pitaevskii equation (SGPE) that have been extensively used in numerical simulations. Equilibrium properties like density profiles, correlation functions, and the condensate statistics are compared to predictions based upon a number of alternative theories. We find that due to thermal phase fluctuations, and the corresponding condensate depletion, the NCB approach loses its validity at relatively low temperatures. This can be attributed to the change in the Bogoliubov spectrum, as the condensate gets thermally depleted, and to large fluctuations beyond perturbation theory. Although the two stochastic theories are built on different thermodynamic ensembles (NCB, canonical; SGPE, grand-canonical), they yield the correct condensate statistics in a large Bose-Einstein condensate (BEC) (strong enough particle interactions). For smaller systems, the SGPE results are prone to anomalously large number fluctuations, well known for the grand-canonical ideal Bose gas. Based on the comparison of the above theories to the modified Popov approach, we propose a simple procedure for approximately extracting the Penrose-Onsager condensate from first- and second-order correlation functions that is both computationally convenient and of potential use to experimentalists. This also clarifies the link between condensate and quasicondensate in the Popov theory of low-dimensional systems.
We propose a simple complexity indicator of classical Liouvillian dynamics, namely the separability entropy, which determines the logarithm of an effective number of terms in a Schmidt decomposition of phase space density with respect to an arbitrary fixed product basis. We show that linear growth of separability entropy provides a stricter criterion of complexity than Kolmogorov-Sinai entropy, namely it requires that the dynamics be exponentially unstable, nonlinear, and non-Markovian.
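The separability entropy can be sketched directly: for a phase-space density rho(q, p) sampled on a product grid, the Schmidt coefficients with respect to the fixed product basis are the singular values of the matrix rho_ij, and the entropy of their normalized squares is the logarithm of the effective number of terms. A minimal sketch, where the discretization and the two test densities are our own illustrative choices:

```python
import numpy as np

def separability_entropy(rho):
    """Separability entropy of a density rho(q, p) given on a (q, p) grid:
    the Schmidt decomposition w.r.t. the product basis q x p is an SVD."""
    s = np.linalg.svd(rho, compute_uv=False)
    w = s**2 / np.sum(s**2)           # normalized Schmidt weights
    w = w[w > 1e-15]                  # drop numerical zeros
    return -np.sum(w * np.log(w))     # log of effective number of terms

q = np.linspace(-1.0, 1.0, 64)
p = np.linspace(-1.0, 1.0, 64)

# A product density rho(q, p) = f(q) g(p) has a single Schmidt term,
# hence zero separability entropy.
product = np.outer(np.exp(-q**2), np.exp(-p**2))

# A q-p correlated density needs many product terms.
Q, P = np.meshgrid(q, p, indexing="ij")
correlated = np.exp(-(Q - P)**2 / 0.05)
```

The quantity is basis-dependent by construction, which is exactly the point: complexity is measured relative to an arbitrary but fixed product decomposition of phase space.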
Given some observable H on a finite-dimensional quantum system, we investigate the typical properties of random state vectors |psi> that have a fixed expectation value <psi|H|psi> = E with respect to H. Under some conditions on the spectrum, we prove that this manifold of quantum states shows a concentration of measure phenomenon: any continuous function on this set is almost everywhere close to its mean. We also give a method to estimate the corresponding expectation values analytically, and we prove a formula for the typical reduced density matrix in the case that H is a sum of local observables. We discuss the implications of our results as new proof tools in quantum information theory and for studying phenomena in quantum statistical mechanics. As a by-product, we derive a method to sample the resulting distribution numerically, which generalizes the well-known Gaussian method for drawing random states from the sphere.
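The well-known Gaussian method mentioned at the end - drawing a complex Gaussian vector and normalizing it to obtain a Haar-random state on the sphere - can be sketched as follows; the observable H and the dimension are arbitrary choices, and the paper's generalization to a fixed <psi|H|psi> = E is not reproduced here:

```python
import numpy as np

def random_state(d, rng):
    """Haar-random state vector in C^d via the Gaussian method."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

# Sanity check: for Haar-random states, the average expectation value
# of any observable H equals tr(H)/d.
rng = np.random.default_rng(0)
d = 8
H = np.diag(np.arange(d, dtype=float))   # some observable, tr(H)/d = 3.5

samples = []
for _ in range(4000):
    psi = random_state(d, rng)
    samples.append(np.real(np.vdot(psi, H @ psi)))  # <psi|H|psi>
```

The invariance of the multivariate Gaussian under unitaries is what makes the normalized vector Haar-distributed; the paper's sampler restricts this distribution to the submanifold of fixed energy E.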
Symmetry-breaking bifurcations have been studied for convection in a nonrotating spherical shell whose outer radius is twice the inner radius, under the influence of an externally applied central force field with a radial dependence proportional to 1/r(5). This work is motivated by the GeoFlow experiment, which is performed under microgravity conditions at the International Space Station, where this particular central force can be generated. In order to predict the observable patterns, simulations together with path-following techniques and stability computations have been applied. Branches of axisymmetric, octahedral, and seven-cell solutions have been traced. The bifurcations producing them have been identified and their stability ranges determined. At higher Rayleigh numbers, time-periodic states with a complex spatiotemporal symmetry are found, which we call breathing patterns.
Langmuir monolayer degradation (LMD) experiments with polymers possessing outstanding biomedical application potential yield information regarding the kinetics of their hydrolytic or enzymatic chain scission under well-defined and adjustable degradation conditions. A brief review is given of LMD investigations, including the author's own work on 2-dimensional (2D) polymer systems, providing chain scission data, which are not disturbed by simultaneously occurring transport phenomena, such as water penetration into the sample or transport of scission fragments out of the sample.
A knowledge-based approach for the description and simulation of polymer hydrolytic and enzymatic degradation based on a combination of fast LMD experiments and computer simulation of the water penetration is briefly introduced. Finally, the advantages and disadvantages of this approach are discussed.
Supermassive black holes are a fundamental component of the universe in general and of galaxies in particular. Almost every massive galaxy harbours a supermassive black hole (SMBH) in its center. Furthermore, there is a close connection between the growth of the SMBH and the evolution of its host galaxy, manifested in the relationship between the mass of the black hole and various properties of the galaxy's spheroid component, like its stellar velocity dispersion, luminosity or mass. Understanding this relationship and the growth of SMBHs is essential for our picture of galaxy formation and evolution. In this thesis, I make several contributions to improve our knowledge on the census of SMBHs and on the coevolution of black holes and galaxies. The first route I follow on this road is to obtain a complete census of the black hole population and its properties. Here, I focus particularly on active black holes, observable as Active Galactic Nuclei (AGN) or quasars. These are found in large surveys of the sky. In this thesis, I use one of these surveys, the Hamburg/ESO survey (HES), to study the AGN population in the local volume (z~0). The demographics of AGN are traditionally represented by the AGN luminosity function, the distribution function of AGN at a given luminosity. I determined the local (z<0.3) optical luminosity function of so-called type 1 AGN, based on the broad band B_J magnitudes and AGN broad Halpha emission line luminosities, free of contamination from the host galaxy. I combined this result with fainter data from the Sloan Digital Sky Survey (SDSS) and constructed the best current optical AGN luminosity function at z~0. The comparison of the luminosity function with higher redshifts supports the current notion of 'AGN downsizing', i.e. the space density of the most luminous AGN peaks at higher redshifts and the space density of less luminous AGN peaks at lower redshifts. 
However, the AGN luminosity function does not reveal the full picture of active black hole demographics. This requires knowledge of the physical quantities, foremost the black hole mass and the accretion rate of the black hole, and of the respective distribution functions, the active black hole mass function and the Eddington ratio distribution function. I developed a method for an unbiased estimate of these two distribution functions, employing a maximum likelihood technique and fully accounting for the selection function. I used this method to determine the active black hole mass function and the Eddington ratio distribution function for the local universe from the HES. I found a wide intrinsic distribution of black hole accretion rates and black hole masses. The comparison of the local active black hole mass function with the local total black hole mass function reveals evidence for 'AGN downsizing', in the sense that in the local universe the most massive black holes are in a less active stage than lower-mass black holes. The second route I follow is a study of redshift evolution in the black hole-galaxy relations. While theoretical models can in general explain the existence of these relations, their redshift evolution puts strong constraints on these models. Observational studies of the black hole-galaxy relations naturally suffer from selection effects. These can potentially bias the conclusions inferred from the observations if they are not taken into account. I investigated the issue of selection effects on type 1 AGN samples in detail and discuss various sources of bias, e.g. an AGN luminosity bias, an active fraction bias, and an AGN evolution bias. If the selection function of the observational sample and the underlying distribution functions are known, it is possible to correct for this bias. I present a fitting method to obtain an unbiased estimate of the intrinsic black hole-galaxy relations from samples that are affected by selection effects.
Third, I try to improve our census of dormant black holes and the determination of their masses. One of the most important techniques to determine the black hole mass in quiescent galaxies is stellar dynamical modeling. This method employs photometric and kinematic observations of the galaxy and infers the gravitational potential from the stellar orbits. It can reveal the presence of the black hole and give its mass, if the sphere of the black hole's gravitational influence is spatially resolved. However, usually the presence of a dark matter halo is ignored in the dynamical modeling, potentially causing a bias in the determined black hole mass. I ran dynamical models for a sample of 12 galaxies, including a dark matter halo. For galaxies for which the black hole's sphere of influence is not well resolved, I found that the black hole mass is systematically underestimated when the dark matter halo is ignored, while there is almost no effect for galaxies with a well-resolved sphere of influence.
Ground-based gamma-ray astronomy has had a major breakthrough with the impressive results obtained using systems of imaging atmospheric Cherenkov telescopes. Ground-based gamma-ray astronomy has huge potential in astrophysics, particle physics, and cosmology. CTA is an international initiative to build the next-generation instrument, with a factor of 5-10 improvement in sensitivity in the 100 GeV-10 TeV range and an extension to energies well below 100 GeV and above 100 TeV. CTA will consist of two arrays (one in the north, one in the south) for full-sky coverage and will be operated as an open observatory. The design of CTA is based on currently available technology. This document reports on the status and presents the major design concepts of CTA.
We consider the nonlinear extension of the Kuramoto model of globally coupled phase oscillators, where the phase shift in the coupling function depends on the order parameter. A bifurcation analysis of the transition from the fully synchronous state to partial synchrony is performed. We demonstrate that for small ensembles the transition is typically mediated by stable cluster states, which disappear with the creation of heteroclinic cycles, while for a larger number of oscillators a direct transition from full synchrony to a periodic or a quasiperiodic regime occurs.
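The qualitative mechanism, full synchrony giving way to partial synchrony once the order-parameter-dependent phase shift exceeds pi/2, can be sketched with a minimal simulation; the specific form alpha(R) = alpha0 + beta*R^2 and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate_R(N=50, eps=0.7, alpha0=0.3, beta=2.0, dt=0.01,
               steps=20000, seed=1):
    """Euler integration of identical mean-field phase oscillators,
    d(phi_k)/dt = eps * R * sin(Theta - phi_k - alpha(R)),
    with the nonlinear phase shift alpha(R) = alpha0 + beta * R^2.
    Returns the final order parameter R."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 0.1, N)          # nearly synchronous start
    for _ in range(steps):
        z = np.mean(np.exp(1j * phi))       # complex order parameter
        R, Theta = np.abs(z), np.angle(z)
        alpha = alpha0 + beta * R**2        # order-parameter-dependent shift
        phi = phi + dt * eps * R * np.sin(Theta - phi - alpha)
    return np.abs(np.mean(np.exp(1j * phi)))

R_linear = simulate_R(beta=0.0)    # constant shift: full synchrony persists
R_nonlinear = simulate_R()         # alpha(1) > pi/2: synchrony breaks up
```

Full synchrony is linearly stable while cos(alpha(R)) > 0; with beta = 2 the shift at R = 1 exceeds pi/2, so the ensemble spreads out and settles into a partially synchronous state with R < 1.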
We report the detection of pulsed gamma rays from the Crab pulsar at energies above 100 giga-electron volts (GeV) with the Very Energetic Radiation Imaging Telescope Array System (VERITAS) array of atmospheric Cherenkov telescopes. The detection cannot be explained on the basis of current pulsar models. The photon spectrum of pulsed emission between 100 mega-electron volts and 400 GeV is described by a broken power law that is statistically preferred over a power law with an exponential cutoff. It is unlikely that the observation can be explained by invoking curvature radiation as the origin of the observed gamma rays above 100 GeV. Our findings require that these gamma rays be produced more than 10 stellar radii from the neutron star.
Change points in time series are perceived as isolated singularities where two regular trends of a given signal do not match. The detection of such transitions is of fundamental interest for the understanding of a system's internal dynamics or external forcings. In practice, observational noise makes it difficult to detect such change points in time series. In this work we elaborate a Bayesian algorithm to estimate the location of the singularities and to quantify their credibility. We validate the performance and sensitivity of our inference method by estimating change points in synthetic data sets. As an application, we use our algorithm to analyze the annual flow volume of the Nile River at Aswan from 1871 to 1970, where we confirm a well-established significant transition point within the time series.
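A minimal Bayesian change-point sketch in the same spirit: a single change in mean, Gaussian noise of known standard deviation, a flat prior over the change location, and segment means profiled out. This is a simplified stand-in, not the authors' algorithm:

```python
import numpy as np

def changepoint_posterior(y, sigma=1.0):
    """Posterior over the location of a single change in mean, using the
    profile likelihood of a two-segment Gaussian model and a flat prior."""
    n = len(y)
    log_post = np.full(n, -np.inf)
    for k in range(2, n - 1):              # at least 2 points per segment
        left, right = y[:k], y[k:]
        rss = np.sum((left - left.mean())**2) + np.sum((right - right.mean())**2)
        log_post[k] = -rss / (2.0 * sigma**2)
    log_post -= log_post.max()             # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Synthetic series: mean jumps from 0 to 1.5 at index 60.
rng = np.random.default_rng(42)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.5, 1.0, 40)])

post = changepoint_posterior(y)
k_map = int(np.argmax(post))                         # most credible location
mass = post[max(k_map - 5, 0):k_map + 6].sum()       # credibility of +-5 window
```

The posterior mass around the MAP location plays the role of the credibility measure; a real application would also marginalize over the noise level and compare against a no-change model.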
The present work collects two introductory chapters and ten essays that can be read as critical-constructive contributions to an "experiential understanding" ("erlebendes Verstehen", Buck) of physics. The traditional design of school physics aims at a systematic presentation of scientific knowledge, which is then applied to selected examples: school experiments prove the statements of the systematic framework (or at least make them plausible), and selected phenomena are explained. Within such a framework, however, there is a real danger of losing the connection to the students' lived reality and interests. This problem has been known for at least 90 years, but didactic responses - inquiry-based learning, contextualization, student experiments, etc. - address symptoms rather than causes. Science becomes engaging by establishing a specifically investigative relationship to the world: one would have to learn, as it were, not knowledge but "how to ask questions" (and, of course, how answers are found...). But what can this look like at the level of school physics, and what theoretical framework can there be for it? The collected papers pursue some of these threads: the rejection of overly model-bound thinking in phenomenological optics, the distinction between formal-mathematical thinking and forms of scientific reasoning and evidence closer to lived reality, the potential of alternative interpretations of "physics teaching", the question of "understanding", and others.
In doing so, not only do connections to the modern educational paradigm of competence become visible, but an attempt is also made to give a whole series of concrete examples from (school) physics of what happens when the topic is no longer already-known answers but expeditions devoted to the physical world: the key concepts of the discipline, the methods of data collection and interpretation, and the movements of searching and thinking are addressed in a way that does not lean on the systematic structure of the discipline but seeks to motivate, delineate, and make it comprehensible.
Genetic differentiation in the competitive and reproductive ability of invading populations can result from genetic Allee effects or r/K selection at the local or range-wide scale. However, the neutral relatedness of populations may either mask or falsely suggest adaptation and genetic Allee effects.
In a common-garden experiment, we investigated the competitive and reproductive ability of invasive Senecio inaequidens populations that vary in neutral genetic diversity, population age and field vegetation cover. To account for population relatedness, we analysed the experimental results with 'animal models' adopted from quantitative genetics.
Consistent with adaptive r/K differentiation at local scales, we found that genotypes from low-competition environments invest more in reproduction and are more sensitive to competition. By contrast, apparent effects of large-scale r/K differentiation and apparent genetic Allee effects can largely be explained by neutral population relatedness.
Invading populations should not be treated as homogeneous groups, as they may adapt quickly to small-scale environmental variation in the invaded range. Furthermore, neutral population differentiation may strongly influence invasion dynamics and should be accounted for in analyses of common-garden experiments.