We report on the gamma-ray activity of the blazar Mrk 501 during the first 480 days of Fermi operation. We find that the average Large Area Telescope (LAT) gamma-ray spectrum of Mrk 501 can be well described by a single power-law function with a photon index of 1.78 +/- 0.03. While we observe relatively mild flux variations with the Fermi-LAT (within less than a factor of two), we detect remarkable spectral variability, where the hardest observed spectral index within the LAT energy range is 1.52 +/- 0.14 and the softest one is 2.51 +/- 0.20. These unexpected spectral changes do not correlate with the measured flux variations above 0.3 GeV. In this paper, we also present the first results from the 4.5 month long multifrequency campaign (2009 March 15-August 1) on Mrk 501, which included the Very Long Baseline Array (VLBA), Swift, RXTE, MAGIC, and VERITAS, the F-GAMMA, GASP-WEBT, and other collaborations and instruments which provided excellent temporal and energy coverage of the source throughout the entire campaign. The extensive radio to TeV data set from this campaign provides us with the most detailed spectral energy distribution yet collected for this source during its relatively low activity. The average spectral energy distribution of Mrk 501 is well described by the standard one-zone synchrotron self-Compton (SSC) model. In the framework of this model, we find that the dominant emission region is characterized by a size <~ 0.1 pc (comparable within a factor of a few to the size of the partially resolved VLBA core at 15-43 GHz), and that the total jet power (~10^44 erg s^-1) constitutes only a small fraction (~10^-3) of the Eddington luminosity. The energy distribution of the freshly accelerated radiating electrons required to fit the time-averaged data has a broken power-law form in the energy range 0.3 GeV-10 TeV, with spectral indices 2.2 and 2.7 below and above the break energy of 20 GeV.
We argue that such a form is consistent with a scenario in which the bulk of the energy dissipation within the dominant emission zone of Mrk 501 is due to relativistic, proton-mediated shocks. We find that the ultrarelativistic electrons and mildly relativistic protons within the blazar zone, if comparable in number, are in approximate energy equipartition, with their energy dominating the jet magnetic field energy by about two orders of magnitude.
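The broken power-law electron distribution quoted above can be sketched numerically. This is a minimal illustration with an arbitrary normalization and a sharp break, not the actual SSC fitting code:

```python
import numpy as np

def broken_power_law(E, E_break=20.0, p1=2.2, p2=2.7, norm=1.0):
    """Sharply broken power-law electron spectrum dN/dE.

    E and E_break in GeV; indices p1 (below break) and p2 (above break)
    follow the values quoted for Mrk 501; the normalization is arbitrary.
    """
    E = np.asarray(E, dtype=float)
    low = norm * E ** (-p1)
    # match the two branches at the break so dN/dE is continuous
    high = norm * E_break ** (p2 - p1) * E ** (-p2)
    return np.where(E < E_break, low, high)

# electron energies from 0.3 GeV to 10 TeV, as in the time-averaged fit
energies = np.logspace(np.log10(0.3), 4, 100)
spectrum = broken_power_law(energies)
```

The prefactor `E_break ** (p2 - p1)` on the steep branch enforces continuity of the two segments at the break energy.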
Supermassive black holes are a fundamental component of the universe in general and of galaxies in particular. Almost every massive galaxy harbours a supermassive black hole (SMBH) in its centre. Furthermore, there is a close connection between the growth of the SMBH and the evolution of its host galaxy, manifested in the relationship between the mass of the black hole and various properties of the galaxy's spheroid component, like its stellar velocity dispersion, luminosity, or mass. Understanding this relationship and the growth of SMBHs is essential for our picture of galaxy formation and evolution. In this thesis, I make several contributions to improving our knowledge of the census of SMBHs and of the coevolution of black holes and galaxies. The first route I follow on this road is to obtain a complete census of the black hole population and its properties. Here, I focus particularly on active black holes, observable as Active Galactic Nuclei (AGN) or quasars. These are found in large surveys of the sky. In this thesis, I use one of these surveys, the Hamburg/ESO survey (HES), to study the AGN population in the local volume (z~0). The demographics of AGN are traditionally represented by the AGN luminosity function, the space density of AGN as a function of luminosity. I determined the local (z<0.3) optical luminosity function of so-called type 1 AGN, based on the broad-band B_J magnitudes and AGN broad Halpha emission line luminosities, free of contamination from the host galaxy. I combined this result with fainter data from the Sloan Digital Sky Survey (SDSS) and constructed the best current optical AGN luminosity function at z~0. The comparison of the luminosity function with higher redshifts supports the current notion of 'AGN downsizing', i.e. the space density of the most luminous AGN peaks at higher redshifts, while the space density of less luminous AGN peaks at lower redshifts.
However, the AGN luminosity function does not reveal the full picture of active black hole demographics. This requires knowledge of the physical quantities, foremost the black hole mass and the accretion rate, and of the respective distribution functions, the active black hole mass function and the Eddington ratio distribution function. I developed a method for an unbiased estimate of these two distribution functions, employing a maximum likelihood technique and fully accounting for the selection function. I used this method to determine the active black hole mass function and the Eddington ratio distribution function for the local universe from the HES. I found a wide intrinsic distribution of black hole accretion rates and black hole masses. The comparison of the local active black hole mass function with the local total black hole mass function reveals evidence for 'AGN downsizing', in the sense that in the local universe the most massive black holes are in a less active stage than lower mass black holes. The second route I follow is a study of redshift evolution in the black hole-galaxy relations. While theoretical models can in general explain the existence of these relations, their redshift evolution puts strong constraints on these models. Observational studies of the black hole-galaxy relations naturally suffer from selection effects, which can bias the conclusions inferred from the observations if they are not taken into account. I investigated the issue of selection effects on type 1 AGN samples in detail and discuss various sources of bias, e.g. an AGN luminosity bias, an active fraction bias, and an AGN evolution bias. If the selection function of the observational sample and the underlying distribution functions are known, it is possible to correct for this bias. I present a fitting method to obtain an unbiased estimate of the intrinsic black hole-galaxy relations from samples that are affected by selection effects.
Third, I try to improve our census of dormant black holes and the determination of their masses. One of the most important techniques to determine the black hole mass in quiescent galaxies is stellar dynamical modelling. This method employs photometric and kinematic observations of the galaxy and infers the gravitational potential from the stellar orbits. It can reveal the presence of the black hole and give its mass, if the sphere of the black hole's gravitational influence is spatially resolved. However, the presence of a dark matter halo is usually ignored in the dynamical modelling, potentially biasing the determined black hole mass. I ran dynamical models for a sample of 12 galaxies, including a dark matter halo. For galaxies whose black hole sphere of influence is not well resolved, I found that the black hole mass is systematically underestimated when the dark matter halo is ignored, while there is almost no effect for galaxies with a well-resolved sphere of influence.
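The correction of a flux-limited sample for its selection function can be illustrated with the classic binned 1/Vmax estimator of a luminosity function. This is a toy sketch, not the maximum-likelihood machinery developed in the thesis, and all inputs here are hypothetical:

```python
import numpy as np

def vmax_luminosity_function(L, z_max, v_of_z, bins):
    """Binned 1/Vmax estimator of a luminosity function.

    L      : luminosities of the detected objects
    z_max  : for each object, the maximum redshift at which it would
             still pass the survey flux limit (encodes the selection)
    v_of_z : callable returning the comoving survey volume out to z
    bins   : luminosity bin edges
    Returns phi, the space density per unit luminosity in each bin.
    """
    vmax = v_of_z(np.asarray(z_max))
    phi = np.zeros(len(bins) - 1)
    idx = np.digitize(L, bins) - 1
    for i, w in zip(idx, 1.0 / vmax):
        if 0 <= i < len(phi):     # ignore objects outside the bin range
            phi[i] += w
    return phi / np.diff(bins)
```

Each object contributes the inverse of the volume in which it could have been detected, so intrinsically faint objects (small accessible volume) are up-weighted relative to luminous ones.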
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of space-like infinity, yet quite general in the interior. The proof relies on abstract, non-constructive arguments, which makes it difficult to compute such data numerically by similar means. We present a quasilinear elliptic system of equations which we expect can be used to construct vacuum initial data that are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions. It is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when trying to improve this result to show the existence of solutions that are asymptotically static at higher order. The problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions in asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem. A method to circumvent the difficulties that arise when fast decay rates are required is discussed, in a way that opens the possibility of performing numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to the part of the quasilinear system that gives rise to the difficulties. For this decomposition, analogous results are obtained. It turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications.
The question of whether the results obtained so far can be used again to show, by a perturbation argument, the existence of vacuum initial data which approach static solutions at infinity at any given order thus remains open. The answer requires further analysis and perhaps new methods.
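As a reminder of the model problem, the Helmholtz decomposition splits a vector field into a gradient part and a divergence-free part; in its standard textbook form (decay conditions at infinity left implicit) it reads

```latex
v_i = D_i \phi + w_i, \qquad D^i w_i = 0, \qquad \Delta \phi = D^i v_i ,
```

where the Poisson equation for the potential follows from taking the divergence of the first relation; the decay rate admissible for phi at infinity is exactly the kind of issue the text discusses.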
This thesis covers the topic "Thinning and Turbulence in Aqueous Films". Experimental studies in two-dimensional systems have gained an increasing amount of attention during the last decade. Thin liquid films serve as paradigms of atmospheric convection, thermal convection in the Earth's mantle, or turbulence in magnetohydrodynamics. Recent research on colloids, interfaces, and nanofluids has led to advances in the development of micro-mixers (lab-on-a-chip devices). In this project, a detailed description of a thin film experiment with a focus on the particular surface forces is presented. The impact of turbulence on the thinning of liquid films oriented parallel to the gravitational force is studied. An experimental setup was developed which permits the capturing of thin film interference patterns under controlled surface and atmospheric conditions. The measurement setup also serves as a prototype of a mixer based on thermally induced turbulence in liquid thin films with thicknesses in the nanometer range. The convection is realized by placing a cooled copper rod in the center of the film. The temperature gradient between the rod and the atmosphere results in a density gradient in the liquid film, so that different buoyancies generate turbulence. In the work at hand, the thermally driven convection is characterized by a newly developed algorithm named Cluster Imaging Velocimetry (CIV). This routine determines the flow-relevant vector fields (velocity and deformation). On the basis of these insights, the flow in the experiment was investigated with respect to its mixing properties. The mixing characteristics were compared to theoretical models, and the mixing efficiency of the flow scheme was calculated. The gravitationally driven thinning of the liquid film was analyzed under the influence of turbulence. Strong shear forces lead to the generation of ultra-thin domains which consist of Newton black film.
Due to the exponential expansion of the thin areas and the efficient mixing, this two-phase flow rapidly turns into the convection of only ultra-thin film. This turbulence-driven transition was observed and quantified for the first time. The existence of stable convection in liquid nanofilms was likewise proven for the first time in the context of this work.
To identify extreme events in the dynamics of the Indian summer monsoon (ISM) in the geological past, I propose a novel approach based on quantifying fluctuations of a nonlinear similarity measure. This measure is sensitive to time intervals with pronounced changes in the dynamical complexity of short time series. A mathematical relationship between the new measure and dynamical invariants of the underlying system, such as fractal dimensions and Lyapunov exponents, is derived analytically. Furthermore, I develop a statistical test to estimate the significance of the dynamical transitions identified in this way. The strengths of the method are demonstrated by uncovering bifurcation structures in paradigmatic model systems, where, in comparison with traditional Lyapunov exponents, it allows the identification of more complex dynamical transitions. We apply the newly developed method to real measurement data in order to detect pronounced dynamical changes on millennial time scales in climate proxy records of the South Asian summer monsoon system during the Pleistocene. It turns out that many of these transitions are induced by the external influence of varying solar insolation, as well as by factors internal to the climate system acting on the monsoon (Northern Hemisphere glacial cycles and the onset of the tropical Walker circulation). Despite its applicability to general time series, the discussed approach is particularly suited to the study of short palaeoclimate time series. Owing to the underlying dynamics of the atmospheric circulation and to topographic influences, the rainfall over the Indian subcontinent during the ISM occurs in highly complex spatiotemporal patterns.
I present a detailed analysis of summer monsoon rainfall over the Indian peninsula based on event synchronization (ES), a measure of the nonlinear correlation of point processes such as rainfall events. Using hierarchical clustering algorithms, I first identify regions with particularly coherent or homogeneous monsoon rainfall. In this way, the time-delay patterns of rain events can also be reconstructed. In addition, I carry out further analyses based on the theory of complex networks. These studies yield valuable insights into the spatial organization, scales, and structures of heavy rainfall events above the 90th and 94th percentiles during the ISM (June to September). Furthermore, I investigate the influence of various critical synoptic atmospheric systems, as well as of the steep topography of the Himalayas, on these rainfall patterns. The presented method is not only suitable for visualizing the structure of extreme rainfall events, but can also identify atmospheric moisture transport pathways and moisture sinks over the region on decadal scales. Finally, a simple method based on complex networks is presented for deciphering the spatial fine structure and temporal evolution of monsoon rainfall extremes during the past 60 years.
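The event synchronization measure underlying this analysis can be sketched as follows. This is a simplified symmetric variant of the measure as commonly defined (after Quiroga et al.), not necessarily the exact implementation used in the thesis:

```python
import numpy as np

def event_synchronization(tx, ty, tau_max=np.inf):
    """Simplified symmetric event synchronization between two sites.

    tx, ty : sorted event times (e.g. days with heavy rainfall).
    A pair of events (i, j) counts as synchronous if their time
    difference is smaller than the local inter-event time scale tau,
    taken as half the minimum adjacent waiting time (capped by tau_max).
    Returns Q in [0, 1]; 1 means every considered event has a partner.
    """
    tx, ty = np.asarray(tx, float), np.asarray(ty, float)
    c = 0.0
    for i in range(1, len(tx) - 1):
        for j in range(1, len(ty) - 1):
            tau = 0.5 * min(tx[i + 1] - tx[i], tx[i] - tx[i - 1],
                            ty[j + 1] - ty[j], ty[j] - ty[j - 1], tau_max)
            if abs(tx[i] - ty[j]) < tau:
                c += 1.0
    return c / np.sqrt((len(tx) - 2) * (len(ty) - 2))
```

Because the coincidence window tau adapts to the local event rate, the measure needs no fixed time-lag parameter, which makes it well suited to irregular point processes such as rainfall events.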
This thesis puts its focus on the physics of neutron stars and their description with methods of numerical relativity. In the first step, a new numerical framework, the Whisky2D code, is developed, which solves the relativistic equations of hydrodynamics in axisymmetry. To this end, we consider an improved formulation of the conserved form of these equations. The second part uses the new code to investigate the critical behaviour of two colliding neutron stars. Considering the analogy to phase transitions in statistical physics, we investigate the evolution of the entropy of the neutron stars during the whole process. A better understanding of the evolution of thermodynamical quantities, like the entropy in critical processes, should provide deeper insight into thermodynamics in relativity. More specifically, we have written the Whisky2D code, which solves the general-relativistic hydrodynamics equations in a flux-conservative form and in cylindrical coordinates. This, of course, brings in 1/r singular terms, where r is the radial cylindrical coordinate, which must be dealt with appropriately. In the above-referenced works, the flux operator is expanded and the 1/r terms, not containing derivatives, are moved to the right-hand side of the equation (the source term), so that the left-hand side assumes a form identical to that of the three-dimensional (3D) Cartesian formulation. We call this the standard formulation. Another possibility is not to split the flux operator and to redefine the conserved variables via a multiplication by r. We call this the new formulation. The new equations are solved with the same methods as in the Cartesian case. From a mathematical point of view, one would not expect differences between the two ways of writing the differential operator, but, of course, a difference is present at the numerical level.
Our tests show that the new formulation yields results with a global truncation error which is one or more orders of magnitude smaller than that of alternative and commonly used formulations. The second part of the thesis uses the new code for investigations of critical phenomena in general relativity. In particular, we consider the head-on collision of two neutron stars in a region of the parameter space where the two possible final states, a new stable neutron star or a black hole, lie close to each other. In 1993, Choptuik considered one-parameter families of solutions, S[P], of the Einstein-Klein-Gordon equations for a massless scalar field in spherical symmetry, such that for every P > P⋆, S[P] contains a black hole, and for every P < P⋆, S[P] is a solution not containing singularities. He studied numerically the behavior of S[P] as P → P⋆ and found that the critical solution, S[P⋆], is universal, in the sense that it is approached by all nearly-critical solutions regardless of the particular family of initial data considered. All these phenomena have the common property that, as P approaches P⋆, S[P] approaches a universal solution S[P⋆] and that all the physical quantities of S[P] depend only on |P − P⋆|. The first study of critical phenomena concerning the head-on collision of NSs was carried out by Jin and Suen in 2007. In particular, they considered a series of families of equal-mass NSs, modeled with an ideal-gas EOS, boosted towards each other, and varied the mass of the stars, their separation, their velocity, and the polytropic index in the EOS. In this way they could observe a critical phenomenon of type I near the threshold of black-hole formation, with the putative critical solution being a nonlinearly oscillating star. In a subsequent work, they performed similar simulations but considering the head-on collision of Gaussian distributions of matter.
Also in this case they found the appearance of type-I critical behaviour, but they also performed a perturbative analysis of the initial distributions of matter and of the merged object. Because of the considerable difference found between the eigenfrequencies in the two cases, they concluded that the critical solution does not represent a system near equilibrium, and in particular not a perturbed Tolman-Oppenheimer-Volkoff (TOV) solution. In this thesis we study the dynamics of the head-on collision of two equal-mass NSs using a setup which is as similar as possible to the one considered above. While we confirm that the merged object exhibits a type-I critical behaviour, we also argue against the conclusion that the critical solution cannot be described in terms of an equilibrium solution. Indeed, we show that, in analogy with what is found in, the critical solution is effectively a perturbed unstable solution of the TOV equations. Our analysis also considers the fine structure of the scaling relation of type-I critical phenomena, and we show that it exhibits oscillations in a way similar to the one studied in the context of scalar-field critical collapse.
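For reference, the type-I scaling relation whose fine structure is analyzed takes the standard form: the survival time tau of the near-critical solution diverges logarithmically as the critical parameter value is approached,

```latex
\tau(P) \simeq -\frac{1}{\lambda}\,\ln\left|P - P_{\star}\right| + \mathrm{const},
```

where lambda is the growth rate of the single unstable mode of the critical solution; the reported oscillations appear as a periodic modulation superimposed on this law.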
Eumelanin is a fluorophore with, in part, quite unusual spectral properties. Among other things, earlier publications observed differences between the one- and two-photon-excited fluorescence spectra, which led to the conjecture of a stepwise excitation process in the case of nonlinear excitation. To better understand these and other optical properties of eumelanin, the present work pursued a variety of measurement approaches from linear and nonlinear optics on synthetic eumelanin in 0.1 M NaOH. From the results, a model was derived that consistently describes the observed photonic properties. In this cascaded-state model (cascade model), the absorbed photon energy is transferred stepwise from excited states of high transition energies to excited states of lower transition energies. Transient absorption measurements revealed dominant contributions with short lifetimes in the ps range, indicating a high relaxation rate along the cascade. By studying the nonlinearly excited fluorescence of eumelanin aggregates of different sizes, it could be shown that differences between the linearly and nonlinearly excited fluorescence spectra can arise not only from a stepwise excitation process under nonlinear excitation, but also from differences in the ratios of the quantum yields between small and large aggregates when switching from linear to nonlinear excitation. By determining the excitation cross section and the dependence of the nonlinearly excited fluorescence of eumelanin on the excitation pulse duration, however, a stepwise two-photon excitation process via an intermediate state with lifetimes in the ps range could be demonstrated.
The inspiral and merger of two black holes is among the most exciting and extreme events in our universe. Being among the loudest sources of gravitational waves, they provide a unique dynamical probe of strong-field general relativity and a fertile ground for the observation of fundamental physics. While the detection of gravitational waves alone will allow us to observe our universe through an entirely new window, combining the information obtained from both gravitational-wave and electromagnetic observations will allow us to gain even greater insight into some of the most exciting astrophysical phenomena. In addition, binary black-hole mergers serve as an intriguing tool to study the geometry of space-time itself. In this dissertation we study the merger process of binary black holes under a variety of conditions. Our results show that asymmetries in the curvature distribution on the common apparent horizon are correlated with the linear momentum acquired by the merger remnant. We propose useful tools for the analysis of black holes in the dynamical and isolated horizon frameworks and shed light on how the final merger of apparent horizons proceeds after a common horizon has already formed. We connect mathematical theorems with data obtained from numerical simulations and provide a first glimpse of the behavior of these surfaces in situations not accessible to analytical tools. We study electromagnetic counterparts of supermassive binary black-hole mergers with fully 3D general-relativistic simulations of binary black holes immersed both in a uniform magnetic field in vacuum and in a tenuous plasma. We find that while a direct detection of merger signatures with current electromagnetic telescopes is unlikely, secondary emission, either by altering the accretion rate of the circumbinary disk or by synchrotron radiation from accelerated charges, may be detectable.
We propose a novel approach to measure the electromagnetic radiation in these simulations and find a non-collimated emission that dominates over the collimated one appearing in the form of dual jets associated with each of the black holes. Finally, we provide an optimized gravitational-wave detection pipeline using phenomenological waveforms for signals from compact binary coalescence and show that, by including spin effects in the waveform templates, the detection efficiency is drastically improved and the bias on the recovered source parameters is reduced. On the whole, this dissertation provides evidence that a multi-messenger approach to binary black-hole merger observations offers an exciting prospect for understanding these sources and, ultimately, our universe.
The Arctic is a particularly sensitive area with respect to climate change due to the high surface albedo of snow and ice and the extreme radiative conditions. Clouds and aerosols, as parts of the Arctic atmosphere, play an important role in the radiation budget, which is, as yet, poorly quantified and understood. The LIDAR (Light Detection And Ranging) measurements presented in this PhD thesis contribute continuous, altitude-resolved aerosol profiles to the understanding of the occurrence and characteristics of aerosol layers above Ny-Ålesund, Spitsbergen. Attention was focused on the analysis of periods with high aerosol load. As the Arctic spring troposphere exhibits maximum aerosol optical depths (AODs) each year, March and April of both 2007 and 2009 were analyzed. Furthermore, stratospheric aerosol layers of volcanic origin were analyzed for several months following the eruptions of the Kasatochi and Sarychev volcanoes in summer 2008 and 2009, respectively. The Koldewey Aerosol Raman LIDAR (KARL) is an instrument for the active remote sensing of atmospheric parameters using pulsed laser radiation. It is operated at the AWIPEV research base and was fundamentally upgraded within the framework of this PhD project. It is now equipped with a new telescope mirror and new detection optics, which facilitate atmospheric profiling from 450 m above sea level up to the mid-stratosphere. KARL provides highly resolved profiles of the scattering characteristics of aerosol and cloud particles (backscattering, extinction, and depolarization) as well as water vapor profiles within the lower troposphere. Combination of KARL data with data from other instruments on site, namely radiosondes, sun photometer, Micro Pulse LIDAR, and tethersonde system, resulted in a comprehensive data set of scattering phenomena in the Arctic atmosphere.
The two spring periods March and April 2007 and 2009 were at first analyzed based on meteorological parameters, like local temperature and relative humidity profiles as well as large-scale pressure patterns and air mass origin regions. Here, it was not possible to find a clear correlation between enhanced AOD and air mass origin. However, in a comparison of two cloud-free periods in March 2007 and April 2009, large AOD values in 2009 coincided with air mass transport through the central Arctic. This suggests the occurrence of aerosol transformation processes during the aerosol transport to Ny-Ålesund. Measurements on 4 April 2009 revealed maximum AOD values of up to 0.12 and aerosol size distributions changing with altitude. This and other performed case studies suggest the differentiation between three aerosol event types and their origins: vertically limited aerosol layers in dry air, highly variable hygroscopic boundary layer aerosols, and enhanced aerosol load across wide portions of the troposphere. For the spring period 2007, the available KARL data were statistically analyzed using a characterization scheme based on the optical characteristics of the scattering particles. The scheme was validated using several case studies. Volcanic eruptions in the northern hemisphere in August 2008 and June 2009 provided the opportunity to analyze volcanic aerosol layers within the stratosphere. The rate of stratospheric AOD change was similar in both years, with maximum values above 0.1 about three to five weeks after the respective eruption. In both years, the stratospheric AOD remained elevated above usual values until the measurements were stopped in late September for technical reasons. In 2008, up to three aerosol layers were detected; the layer structure in 2009 was characterized by up to six distinct and thin layers which smeared out into one broad layer after about two months. The lowermost aerosol layer was continuously detected at the tropopause altitude.
Three case studies were performed, all of which revealed rather large indices of refraction of m = (1.53-1.55) - 0.02i, suggesting the presence of an absorbing carbonaceous component. The particle radius, derived with inversion calculations, was also similar in both years, with values ranging from 0.16 to 0.19 μm. However, in 2009, a second mode in the size distribution was detected at about 0.5 μm. The long-term measurements with the Koldewey Aerosol Raman LIDAR in Ny-Ålesund provide the opportunity to study Arctic aerosols in the troposphere and the stratosphere not only in case studies but on longer time scales. In this PhD thesis, both tropospheric aerosols in the Arctic spring and stratospheric aerosols following volcanic eruptions have been described qualitatively and quantitatively. Case studies and comparative studies with data of other instruments on site allowed for the analysis of microphysical aerosol characteristics and their temporal evolution.
In the present work, synchronization phenomena in complex dynamical systems exhibiting multiple time scales have been analyzed. Multiple time scales can be active in different manners. Three different systems have been analyzed with different methods from data analysis. The first system studied is a large heterogeneous network of bursting neurons, that is, a system with two predominant time scales: the fast firing of action potentials (spikes) and the bursts of repetitive spikes followed by a quiescent phase. This system has been integrated numerically and analyzed with methods based on recurrence in phase space. An interesting result is the different transitions to synchrony found in the two distinct time scales. Moreover, an anomalous synchronization effect can be observed in the fast time scale, i.e. there is a range of the coupling strength where desynchronization occurs. The second system, analyzed numerically as well as experimentally, is a pair of coupled CO₂ lasers in a chaotic bursting regime. This system is interesting due to its similarity with epidemic models. We explain the bursts by different time scales generated from unstable periodic orbits embedded in the chaotic attractor and perform a synchronization analysis of these different orbits utilizing the continuous wavelet transform. We find a diverse route to synchrony across these different observed time scales. The last system studied is a small network motif of limit-cycle oscillators. Precisely, we have studied a hub motif, which serves as an elementary building block for scale-free networks, a type of network found in many real-world applications. These hubs are of special importance for communication and information transfer in complex networks. Here, a detailed study of the mechanism of synchronization in oscillatory networks with a broad frequency distribution has been carried out. In particular, we find a remote synchronization of nodes in the network which are not directly coupled.
We also explain the responsible mechanism and its limitations and constraints. Further, we derive an analytic expression for it and show that information transmission in pure phase oscillators, such as those of the Kuramoto type, is limited. In addition to the numerical and analytic analysis, an experiment consisting of electrical circuits has been designed. The obtained results confirm the former findings.
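The degree of synchrony in such phase-oscillator networks is conventionally quantified by the Kuramoto order parameter. The sketch below integrates a minimal globally coupled Kuramoto model (not the hub motif studied in the thesis; all parameter values are illustrative):

```python
import numpy as np

def kuramoto_order_parameter(K, N=50, dt=0.05, steps=4000, seed=1):
    """Globally coupled Kuramoto model, integrated with a simple Euler step.

    d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    Returns the order parameter r = |<exp(i*theta)>| averaged over the
    second half of the integration (transient discarded).
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.1, N)       # spread of natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)
    r_sum, n = 0.0, 0
    for s in range(steps):
        z = np.exp(1j * theta).mean()     # complex mean field r*exp(i*psi)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        if s >= steps // 2:
            r_sum += np.abs(np.exp(1j * theta).mean())
            n += 1
    return r_sum / n
```

Well above the critical coupling, r approaches 1 (phase locking); without coupling, the phases drift and r stays at the incoherent level of order 1/sqrt(N).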
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, these molecules act on a length scale for which thermal collisions have a considerable impact on their motion. In this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description of the energy conversion by the molecular motor myosin V, drawing on many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single-molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: for loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured by supplementing the network description with a Markov process for the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule and of treating the mechanical and chemical processes on an equal footing. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles. 
In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions. These reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description for the myosin V motor as studied in single motor experiments.
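The dwell-time idea can be illustrated with a toy Markov model. The sketch below is a deliberate simplification with assumed rates, not the thesis's chemomechanical network: a step that requires passing through two sequential chemical substates produces dwell times that are sums of exponential waits, hence a peaked (non-exponential) distribution, which is the kind of signature dwell-time distributions carry about internal motor states.

```python
import numpy as np

def dwell_times(k1=20.0, k2=20.0, n=20000, seed=1):
    """Dwell times of a toy motor whose step requires two sequential
    chemical substates with exit rates k1 and k2 (illustrative values,
    not fitted myosin-V rates).

    Each dwell is the sum of two exponential waiting times with means
    1/k1 and 1/k2, so the mean dwell is 1/k1 + 1/k2 and the distribution
    is peaked rather than monotonically decaying.
    """
    rng = np.random.default_rng(seed)
    return rng.exponential(1.0 / k1, n) + rng.exponential(1.0 / k2, n)
```

With equal rates the distribution is a Gamma(2) shape; a single rate-limiting substate would instead give a nearly exponential dwell-time distribution.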
The present thesis originated and evolved within the RAdial Velocity Experiment (RAVE), with the goal of measuring chemical abundances from the RAVE spectra and exploiting them to investigate the chemical gradients along the plane of the Galaxy, in order to provide constraints on possible Galactic formation scenarios. RAVE is a large spectroscopic survey which aims to observe ~10^6 stars spectroscopically by the end of 2012 and to measure their radial velocities, atmospheric parameters, and chemical abundances. The project makes use of the UK Schmidt telescope at the Australian Astronomical Observatory (AAO) in Siding Spring, Australia, equipped with the multi-object spectrograph 6dF. To date, RAVE has collected and measured more than 450,000 spectra. The precision of the chemical abundance estimates depends on the reliability of the adopted atomic and atmospheric parameters (in particular the oscillator strengths of the absorption lines and the effective temperature, gravity, and metallicity of the stars measured). Therefore we first identified 604 absorption lines in the RAVE wavelength range and refined their oscillator strengths with an inverse spectral analysis. Then, we improved the RAVE stellar parameters by modifying the RAVE pipeline and the spectral library the pipeline relies on. These modifications removed some systematic errors in the stellar parameters discovered during this work. To obtain chemical abundances, we developed two different processing pipelines. Both of them measure chemical abundances by assuming stellar atmospheres in Local Thermodynamic Equilibrium (LTE). The first one determines elemental abundances from equivalent widths of absorption lines. Since this pipeline showed poor sensitivity to abundances relative to iron, it has been superseded. The second one exploits χ² minimization between observed and model spectra. Thanks to its precision, it has been adopted for the creation of the RAVE chemical catalogue. 
This pipeline provides abundances with uncertainties of about 0.2 dex for spectra with signal-to-noise ratio S/N>40 and about 0.3 dex for spectra with 20<S/N<40. For this work, the pipeline measured chemical abundances of up to 7 elements for 217,358 RAVE stars. With these data we investigated the chemical gradients along the Galactic radius of the Milky Way. We found that stars with low vertical velocities |W| (which stay close to the Galactic plane) show an iron abundance gradient in agreement with previous works (~ -0.07 dex kpc^-1), whereas stars with larger |W|, which are able to reach larger heights above the Galactic plane, show progressively flatter gradients. The gradients of the other elements follow the same trend. This suggests that an efficient radial mixing acts in the Galaxy or that the thick disk formed from homogeneous interstellar matter. In particular, we found hundreds of stars which can be kinematically classified as thick disk stars but which exhibit a chemical composition typical of the thin disk. A few stars of this kind have already been detected by other authors, and their origin is still not clear. One possibility is that they are thin disk stars that were kinematically heated and then underwent an efficient radial mixing process which blurred (and so flattened) the gradient. Alternatively, they may be a "transition population" which represents an evolutionary bridge between the thin and thick disk. Our analysis shows that the two explanations are not mutually exclusive. Future follow-up high-resolution spectroscopic observations will clarify their role in the Galactic disk evolution.
Glacial advances constrained by Be-10 exposure dating of bedrock landslides, Kyrgyz Tien Shan
(2011)
Numerous large landslide deposits occur in the Tien Shan, a tectonically active intraplate orogen in Central Asia. Yet their significance in Quaternary landscape evolution and natural hazard assessment remains unresolved due to the lack of "absolute" age constraints. Here we present the first Be-10 exposure ages for three prominent (>10^7 m^3) bedrock landslides that blocked major rivers and formed lakes, two of which subsequently breached, in the northern Kyrgyz Tien Shan. Three Be-10 ages reveal that one landslide in the Alamyedin River occurred at 11-15 ka, which is consistent with two C-14 ages of gastropod shells from reworked loess capping the landslide. One large landslide in the Aksu River is among the oldest documented in semi-arid continental interiors, with a Be-10 age of 63-67 ka. The Ukok River landslide deposit(s) yielded variable Be-10 ages, which may result from multiple landslides and from inheritance of Be-10. Two Be-10 ages of 8.2 and 5.9 ka suggest that one major landslide occurred in the early to mid-Holocene, followed by at least one other event between 1.5 and 0.4 ka. Judging from the regional glacial chronology, all three landslides occurred between major regional glacial advances. Whereas Alamyedin and Ukok can be considered postglacial in this context, Aksu is of interglacial age. None of the landslide deposits shows traces of glacial erosion; hence their locations and Be-10 ages mark maximum extents and minimum ages of glacial advances, respectively. Using toe-to-headwall altitude ratios of 0.4-0.5, we reconstruct minimum equilibrium-line altitudes that exceed previous estimates by as much as 400 m along the moister northern fringe of the Tien Shan. Our data show that deposits from large landslides can provide valuable spatio-temporal constraints for glacial advances in landscapes where moraines and glacial deposits have low preservation potential.
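The toe-to-headwall altitude ratio (THAR) method mentioned above reduces to a one-line calculation; the sketch below uses made-up altitudes for illustration, with THAR in the 0.4-0.5 range quoted in the abstract.

```python
def thar_ela(toe_m, headwall_m, thar=0.45):
    """Minimum equilibrium-line altitude (ELA) from the toe-to-headwall
    altitude ratio (THAR) method:
        ELA = toe + THAR * (headwall - toe)
    Altitudes are in metres; the example inputs below are illustrative,
    not data from the study.
    """
    return toe_m + thar * (headwall_m - toe_m)

# e.g. a reconstructed glacier with toe at 2000 m and headwall at 4000 m
ela_low = thar_ela(2000.0, 4000.0, thar=0.4)
ela_high = thar_ela(2000.0, 4000.0, thar=0.5)
```

Because the landslide scars mark maximum ice extents, the toe altitude is an upper bound on the former glacier terminus, so the ELA obtained this way is a minimum estimate.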
Context. Extrapolations of solar photospheric vector magnetograms into three-dimensional magnetic fields in the chromosphere and corona are usually done under the assumption that the fields are force-free. This condition is violated in the photosphere itself and a thin layer in the lower atmosphere above. The field calculations can be improved by preprocessing the photospheric magnetograms. The intention here is to remove a non-force-free component from the data.
Aims. We compare two preprocessing methods presently in use, namely the methods of Wiegelmann et al. (2006, Sol. Phys., 233, 215) and Fuhrmann et al. (2007, A&A, 476, 349).
Methods. The two preprocessing methods were applied to a vector magnetogram of the recently observed active region NOAA AR 10953. We examine the changes in the magnetogram effected by the two preprocessing algorithms. Furthermore, the original magnetogram and the two preprocessed magnetograms were each used as input data for nonlinear force-free field extrapolations by means of two different methods, and we analyze the resulting fields.
Results. Both preprocessing methods managed to significantly decrease the magnetic forces and magnetic torques that act through the magnetogram area and that can cause incompatibilities with the assumption of force-freeness in the solution domain. The force and torque decrease is stronger for the Fuhrmann et al. method. Both methods also reduced the amount of small-scale irregularities in the observed photospheric field, which can sharply worsen the quality of the solutions. For the chosen parameter set, the Wiegelmann et al. method led to greater changes in strong-field areas, leaving weak-field areas mostly unchanged, and thus providing an approximation of the magnetic field vector in the chromosphere, while the Fuhrmann et al. method weakly changed the whole magnetogram, thereby better preserving patterns present in the original magnetogram. Both preprocessing methods raised the magnetic energy content of the extrapolated fields to values above the minimum energy, corresponding to the potential field. Also, the fields calculated from the preprocessed magnetograms fulfill the solenoidal condition better than those calculated without preprocessing.
The Casimir-Polder interaction between a single neutral atom and a nearby surface, arising from the (quantum and thermal) fluctuations of the electromagnetic field, is a cornerstone of cavity quantum electrodynamics (cQED) and is theoretically well established. Recently, Bose-Einstein condensates (BECs) of ultracold atoms have been used to test the predictions of cQED. The purpose of the present thesis is to upgrade single-atom cQED with the many-body theory needed to describe trapped atomic BECs. Tools and methods are developed in a second-quantized picture that treats atom and photon fields on the same footing. We formulate a diagrammatic expansion using correlation functions for both the electromagnetic field and the atomic system. The formalism is applied to investigate, for BECs trapped near surfaces, dispersion interactions of the van der Waals-Casimir-Polder type, and the bosonic stimulation in the spontaneous decay of excited atomic states. We also discuss a phononic Casimir effect, which arises from the quantum fluctuations in an interacting BEC.
We analyze the equilibrium properties of a weakly interacting, trapped quasi-one-dimensional Bose gas at finite temperatures and compare different theoretical approaches. We focus in particular on two stochastic theories: a number-conserving Bogoliubov (NCB) approach and a stochastic Gross-Pitaevskii equation (SGPE) that have been extensively used in numerical simulations. Equilibrium properties like density profiles, correlation functions, and the condensate statistics are compared to predictions based upon a number of alternative theories. We find that, due to thermal phase fluctuations and the corresponding condensate depletion, the NCB approach loses its validity at relatively low temperatures. This can be attributed to the change in the Bogoliubov spectrum as the condensate gets thermally depleted, and to large fluctuations beyond perturbation theory. Although the two stochastic theories are built on different thermodynamic ensembles (NCB, canonical; SGPE, grand-canonical), they yield the correct condensate statistics in a large Bose-Einstein condensate (BEC) (strong enough particle interactions). For smaller systems, the SGPE results are prone to anomalously large number fluctuations, well known for the grand-canonical, ideal Bose gas. Based on the comparison of the above theories to the modified Popov approach, we propose a simple procedure for approximately extracting the Penrose-Onsager condensate from first- and second-order correlation functions that is both computationally convenient and of potential use to experimentalists. This also clarifies the link between condensate and quasicondensate in the Popov theory of low-dimensional systems.
Atom chips are a promising candidate for a scalable architecture for quantum information processing provided a universal set of gates can be implemented with high fidelity. The difficult part in achieving universality is the entangling two-qubit gate. We consider a Rydberg phase gate for two atoms trapped on a chip and employ optimal control theory to find the shortest gate that still yields a reasonable gate error. Our parameters correspond to a situation where the Rydberg blockade regime is not yet reached. We discuss the role of spontaneous emission and the effect of noise from the chip surface on the atoms in the Rydberg state.
We show how the spontaneous emission rate of an excited two-level atom placed in a trapped Bose-Einstein condensate of ground-state atoms is enhanced by bosonic stimulation. This stimulation depends on the overlap of the excited matter-wave packet with the macroscopically occupied condensate wave function, and provides a probe of the spatial coherence of the Bose gas. The effect can be used to amplify the distance-dependent decay rate of an excited atom near an interface.
Through the design of their lessons, and thus through their professional practice, physics teachers largely determine how students' individual learning processes concerning physics content proceed. To develop their professional competence, future physics teachers must, on the one hand, acquire knowledge of physics, physics education, and pedagogy and, on the other hand, be motivated to apply this knowledge. In her lecture, Thorid Rabe addresses the question of which physics-education competences students should acquire during their university training. Using the course "Physikalische Schulexperimente" (school experiments in physics) as an example, she shows how physics-education theory and practical teaching can be related to one another. She will also present a research project that investigates a hitherto neglected aspect of professional competence, namely domain-specific self-efficacy expectations: the confidence in one's own ability to act appropriately and successfully as a physics teacher.
Interaction of land surface processes and the atmosphere in the Arctic - sensitivities and extremes
(2011)
The outcomes of measurements on entangled quantum systems can be nonlocally correlated. However, while it is easy to write down toy theories allowing arbitrary nonlocal correlations, those allowed in quantum mechanics are limited. Quantum correlations cannot, for example, violate a principle known as macroscopic locality, which implies that they cannot violate Tsirelson's bound. This paper shows that there is a connection between the strength of nonlocal correlations in a physical theory and the structure of the state spaces of individual systems. This is illustrated by a family of models in which local state spaces are regular polygons, where a natural analogue of a maximally entangled state of two systems exists. We characterize the nonlocal correlations obtainable from such states. The family allows us to study the transition between classical, quantum and super-quantum correlations by varying only the local state space. We show that the strength of nonlocal correlations (in particular, whether the maximally entangled state violates Tsirelson's bound or not) depends crucially on a simple geometric property of the local state space, known as strong self-duality. This result is seen to be a special case of a general theorem, which states that a broad class of entangled states in probabilistic theories (including, by extension, all bipartite classical and quantum states) cannot violate macroscopic locality. Finally, our results show that models exist that are locally almost indistinguishable from quantum mechanics, but can nevertheless generate maximally nonlocal correlations.
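Tsirelson's bound, invoked repeatedly above, can be checked with a few lines of arithmetic. The sketch below assumes the textbook setting (singlet-state correlators E(a, b) = -cos(a - b) and the standard optimal CHSH angles) rather than anything specific to the polygon models of the paper.

```python
import numpy as np

def chsh_singlet():
    """CHSH combination for the two-qubit singlet state.

    With correlators E(a, b) = -cos(a - b) and the standard optimal
    measurement angles, the CHSH value |E(a,b) + E(a,b') + E(a',b)
    - E(a',b')| saturates Tsirelson's bound 2*sqrt(2), the quantum
    ceiling below the algebraic (super-quantum) maximum of 4.
    """
    E = lambda a, b: -np.cos(a - b)
    a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, -np.pi / 4
    return abs(E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2))
```

Classical (local hidden variable) models are capped at 2, quantum mechanics at 2√2 ≈ 2.828, and hypothetical super-quantum boxes at 4; the paper's polygon models interpolate across exactly this hierarchy.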
Entangled inputs can enhance the capacity of quantum channels, this being one of the consequences of the celebrated result showing the nonadditivity of several quantities relevant for quantum information science. In this work, we answer the converse question (can entangled inputs ever make noisy quantum channels attain the maximum capacity?) in the negative: no sophisticated entangled input of any quantum channel can ever enhance the capacity to the maximum possible value, a result that holds true for all channels, both for the classical and the quantum capacity. This result can hence be seen as a bound on how "nonadditive" quantum information can be. As a main result, we find the first practical and remarkably simple computable single-shot bounds on capacities, related to entanglement measures. As examples, we discuss the qubit amplitude-damping channel and identify the first meaningful bound on its classical capacity.
We deduce a new formula for the perihelion advance Θ of a test particle in the Schwarzschild space-time by applying a newly developed nonlinear transformation within that space-time. By this transformation we are able to apply the well-known formula, valid in the weak-field approximation near infinity, also to trajectories in the strong-field regime near the horizon of the black hole. The resulting formula has the structure Θ = c₁ − c₂ ln(c₃² − e²), with positive constants c₁, c₂, c₃ depending on the angular momentum of the test particle. It is especially useful for orbits with large eccentricities e < c₃ < 1, showing that Θ → ∞ as e → c₃.
We report on the detection of strongly varying intergalactic He II absorption in HST/COS spectra of two z_em ≈ 3 quasars. From our homogeneous analysis of the He II absorption in these and three archival sightlines, we find a marked increase in the mean He II effective optical depth from ⟨τ_eff⟩ ≈ 1 at z ≈ 2.3 to ⟨τ_eff⟩ ≳ 5 at z ≈ 3.2, but with a large scatter of 2 ≲ τ_eff ≲ 5 at 2.7 < z < 3 on scales of ~10 proper Mpc. This scatter is primarily due to fluctuations in the He II fraction and the He II-ionizing background, rather than to the density variations that are probed by the coeval H I forest. Semianalytic models of He II absorption require a strong decrease in the He II-ionizing background to explain the strong increase of the absorption at z ≳ 2.7, probably indicating that He II reionization was incomplete at z_reion ≳ 2.7. Likewise, recent three-dimensional numerical simulations of He II reionization qualitatively agree with the observed trend only if He II reionization completes at z_reion ≈ 2.7 or even below, as suggested by a large τ_eff ≳ 3 in two of our five sightlines at z < 2.8. By doubling the sample size at 2.7 ≲ z ≲ 3, our newly discovered He II sightlines for the first time probe the diversity of the second epoch of reionization, when helium became fully ionized.
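The effective optical depths quoted above come from a standard estimator that is simple to state in code; the sketch below applies it to a synthetic flux array, not to the COS data.

```python
import numpy as np

def effective_optical_depth(flux):
    """Effective optical depth from a continuum-normalized transmission
    spectrum: tau_eff = -ln(<F>), i.e. the optical depth of a uniform
    screen that would produce the same mean transmitted flux. This is
    the standard estimator behind He II (and H I) forest measurements;
    the input here is synthetic.
    """
    return float(-np.log(np.mean(flux)))
```

Because the average is taken over the flux (not over per-pixel optical depths), a few highly transparent pixels can keep tau_eff moderate even when most of the spectrum is saturated, which is one reason the sightline-to-sightline scatter is so informative.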
The dynamical structure of genetic networks determines the occurrence of various biological mechanisms, such as cellular differentiation. However, how cellular diversity evolves in relation to the inherent stochasticity and intercellular communication remains to be understood. Here, we define a concept of stochastic bifurcations suitable for investigating the dynamical structure of genetic networks, and show that under stochastic influence the expression of given proteins of interest is defined via the probability distribution of the phase variable, representing one of the genes constituting the system. Moreover, we show that under changing stochastic conditions the probabilities of expressing certain concentration values differ, leading to different functionality of the cells and thus to their differentiation into various types.
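The picture of noise-induced probability distributions over expression states can be illustrated with a toy model. The sketch below is a deliberate stand-in, not the paper's genetic network: a one-variable bistable Langevin equation whose two deterministic attractors become, under noise, the two peaks of a bimodal stationary distribution, the simplest caricature of differentiated cell states.

```python
import numpy as np

def bistable_trajectory(sigma=0.5, t_max=2000.0, dt=0.01, seed=2):
    """Euler-Maruyama integration of a minimal bistable Langevin model,
        dx = (x - x**3) dt + sigma dW.
    The drift has stable fixed points at x = +1 and x = -1; noise of
    strength sigma drives switching between them, so the long-time
    histogram of x is bimodal. All parameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    x = np.empty(n)
    x[0] = 1.0
    for i in range(1, n):
        drift = x[i - 1] - x[i - 1] ** 3
        x[i] = x[i - 1] + drift * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```

The occupation probabilities of the two peaks shift as the noise parameters change, mirroring the abstract's point that different stochastic conditions favour different expression levels.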
We present Hubble Space Telescope observations of (596) Scheila during its recent dust outburst. The nucleus remained point-like with absolute magnitude H_V = 8.85 ± 0.02 in our data, equal to the pre-outburst value, with no secondary fragments of diameter ≥ 100 m (for assumed albedos of 0.04). We find a coma with a peak scattering cross section of ~2.2×10^4 km², corresponding to a mass in micron-sized particles of ~4×10^7 kg. The particles are deflected by solar radiation pressure on projected spatial scales of ~2×10^4 km in the sunward direction, and swept from the vicinity of the nucleus on timescales of weeks. The coma faded by ~30% between observations on UT 2010 December 27 and 2011 January 4. The observed mass loss is inconsistent with an origin either in rotational instability of the nucleus or in electrostatic ejection of regolith charged by sunlight. Dust ejection could be caused by the sudden but unexplained exposure of buried ice. However, the data are most simply explained by the impact, at ~5 km s⁻¹, of a previously unknown asteroid ~35 m in diameter.
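The step from scattering cross section to dust mass is an order-of-magnitude estimate that is easy to reproduce. In the sketch below the grain radius (1 µm) and bulk density (1000 kg/m³) are assumptions of this sketch; only the cross section is taken from the abstract.

```python
def coma_mass(cross_section_km2=2.2e4, grain_radius_m=1.0e-6, rho=1.0e3):
    """Order-of-magnitude dust mass from a coma's scattering cross
    section: for optically thin dust made of grains of radius a and
    bulk density rho, the mass is M ~ (4/3) * rho * a * C, since each
    grain contributes cross section ~pi*a^2 and mass (4/3)*pi*rho*a^3.
    Grain radius and density are assumed values, not from the paper.
    """
    c_m2 = cross_section_km2 * 1.0e6       # km^2 -> m^2
    return (4.0 / 3.0) * rho * grain_radius_m * c_m2
```

With these assumptions the estimate lands at ~3×10^7 kg, within a factor of order unity of the ~4×10^7 kg quoted above; the result scales linearly with the assumed grain radius and density.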
The discovery of a plume of water vapour and ice particles emerging from warm fractures ('tiger stripes') in Saturn's small, icy moon Enceladus [1-6] raised the question of whether the plume emerges from a subsurface liquid source [6-8] or from the decomposition of ice [9-12]. Previous compositional analyses of particles injected by the plume into Saturn's diffuse E ring have already indicated the presence of liquid water [8], but the mechanisms driving the plume emission are still debated [13]. Here we report an analysis of the composition of freshly ejected particles close to the sources. Salt-rich ice particles are found to dominate the total mass flux of ejected solids (more than 99 per cent), but they are depleted in the population escaping into Saturn's E ring. Ice grains containing organic compounds are found to be more abundant in dense parts of the plume. Whereas previous Cassini observations were compatible with a variety of plume formation mechanisms, these data eliminate or severely constrain non-liquid models and strongly imply that a salt-water reservoir with a large evaporating surface [7,8] provides nearly all of the matter in the plume.
Importance of polar solvation for cross-reactivity of antibody and its variants with steroids
(2011)
Understanding in detail the factors determining the binding of ligands to receptors is essential for rational drug design. Here, the free energies of binding of the steroids progesterone (PRG) and 5 beta-androstane-3,17-dione (SAD) to the Diels-Alderase antibody 1E9, as well as to the Leu(H47)Trp/Arg(H100)Trp 1E9 double mutant (1E9dm) and the corresponding single mutants, have been estimated and decomposed using the molecular mechanics-Poisson-Boltzmann surface area (MM-PBSA) method. The difference in binding free energies between the PRG-1E9dm complex and the complex of PRG with the antiprogesterone antibody DB3 has also been evaluated and decomposed. The steroids bind less strongly to 1E9 than to DB3, but the mutations tend to improve the steroid affinity, in quantitative agreement with experimental data. Although the complexes formed by PRG or SAD with 1E9dm and by PRG with DB3 have similar affinity, the binding mechanisms are different. Reduced van der Waals interactions for SAD-1E9dm versus PRG-1E9dm, or for PRG-1E9dm versus PRG-DB3, are energetically compensated by an increased solvation of polar groups, partly contradicting previous conclusions based on structural inspection. Our study illustrates that deducing binding mechanisms from structural models alone can be misleading. Therefore, taking solvation effects into account, as in MM-PBSA calculations, is essential to elucidate molecular recognition.
Quantum theory (QT) is usually formulated in terms of abstract mathematical postulates involving Hilbert spaces, state vectors and unitary operators. In this paper, we show that the full formalism of QT can instead be derived from five simple physical requirements, based on elementary assumptions regarding preparations, transformations and measurements. This is very similar to the usual formulation of special relativity, where two simple physical requirements (the principles of relativity and light-speed invariance) are used to derive the mathematical structure of Minkowski space-time. Our derivation provides insights into the physical origin of the structure of quantum state spaces (including a group-theoretic explanation of the Bloch ball and its three-dimensionality) and suggests several natural possibilities to construct consistent modifications of QT.
Equations of Maxwell type
(2011)
For an elliptic complex of first order differential operators on a smooth manifold X, we define a system of two equations which can be thought of as abstract Maxwell equations. The formal theory of this system proves to be very similar to that of classical Maxwell's equations. The paper focuses on boundary value problems for the abstract Maxwell equations, especially on the Cauchy problem.
To asymptotically complete scattering systems {M₊ + V, M₊} on H₊ := L²(R₊, K, dλ), where M₊ is the multiplication operator on H₊ and V is a trace class operator satisfying analyticity conditions, a decay semigroup is associated such that the spectrum of the generator of this semigroup coincides with the set of all resonances (poles of the analytic continuation of the scattering matrix into the lower half-plane across the positive half-line); i.e., the decay semigroup yields a "time-dependent" characterization of the resonances. As a counterpart, a "spectral characterization" is mentioned, which is due to the "eigenvalue-like" properties of resonances.
Within the Schwinger-Keldysh formalism we derive a Ginzburg-Landau theory for the Bose-Hubbard model which describes the real-time dynamics of the complex order parameter field. Analyzing the excitations in the vicinity of the quantum phase transitions it turns out that particle/hole dispersions in the Mott phase map continuously onto corresponding amplitude/phase excitations in the superfluid phase. Furthermore, in the superfluid phase we find a sound mode, which is in accordance with recent Bragg spectroscopy measurements in the Bogoliubov regime, as well as an additional gapped mode, which seems to have been detected via lattice modulation.
The collective dynamics of oscillator networks with phase-repulsive coupling is studied, considering various network sizes and topologies. The notion of link frustration is introduced to characterize and quantify the network dynamical states. In contrast to the widely studied phase-attractive case, the properties of the final dynamical states in our model depend critically on the network topology. In particular, each network's total frustration value is intimately related to its topology. Moreover, phase-repulsive networks in general display multiple final frustration states, whose statistical and stability properties uniquely identify them.
We present a new approach to observationally constraining the spectral energy distribution of the intergalactic UV background by studying metal absorption systems. We study single-component metal line systems that exhibit various well-measured species. Among the observed transitions, at least two ratios of ionization stages of the same element are required, e.g. C III/C IV and Si III/Si IV. For each system, photoionization models are constructed by varying the spectrum of the ionizing radiation. The spectral energy distribution can then be constrained by comparing the models with the observed column density ratios. Extensive tests with artificial absorbers show that the spectrum of the ionizing radiation cannot be reconstructed unambiguously, but it is possible to constrain the main characteristics of the spectrum. Furthermore, the resulting physical parameters of the absorber, such as ionization parameter, metallicity, and relative abundances, may depend strongly on the adopted ionizing spectrum. Even in the case of well-fitting models, the uncertainties can be as high as ~0.5 dex for the ionization parameter and up to ~1.5 dex for the metallicity. Therefore, it is essential to know the hardness of the UV background when estimating the metallicity of the intergalactic medium. Applying the procedure to a small sample of three observed single-component metal line systems yields soft ionizing radiation at z > 2 and a slightly harder spectrum at z < 2. The resulting energy distributions exhibit strong He II Lyα re-emission features, suggesting that reprocessing by intergalactic He II is important. Comparing the observed systems to UV background spectra from the literature indicates that a recent model, which includes sawtooth modulation due to reprocessing by intergalactic He II with delayed helium reionization, fits the investigated systems very well.
This study uses an in vitro rd10 mouse model to quantify and compare the ability of the monopolar and the (concentric) bipolar electrode configurations for subretinal stimulation. To obtain directly comparable results, an identical region of the retina was stimulated, exploiting the fact that the bipolar electrode configuration also allows for monopolar stimulation if the concentric counter-electrode is set potential-free (floating). A ganglion cell located centrally over the bipolar electrode configuration was selected to extracellularly record action potentials during stimulation. To analyse the recorded action potentials, we introduce a new method which combines the advantages of (a) singular value decomposition (SVD) for weighting the similar modulation patterns with which the recorded action potentials are characterized and (b) multi-curve fitting to identify a common threshold level, required to finally assemble a strength-duration relationship (SDR). By directly comparing the obtained SDR curves, we found that the efficiency of stimulation with the monopolar electrode configuration is significantly higher than with the bipolar electrode configuration. All obtained SDR curves were fitted using the Lapicque model to estimate the chronaxie times and the rheobase currents. Liquid inclusions, possibly separating the retina from the electrodes, are discussed as a major cause of low ganglion cell responses during stimulation with the bipolar electrode configuration.
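Extracting chronaxie and rheobase from an SDR curve can be sketched in a few lines. Note an explicit substitution: the study fits the Lapicque (exponential) model, whereas the sketch below uses the closely related Weiss form, because it admits a closed-form linear fit; the example data are synthetic, with assumed rheobase 5 µA and chronaxie 0.8 ms.

```python
import numpy as np

def fit_weiss_sdr(durations_ms, thresholds_uA):
    """Recover rheobase and chronaxie from a strength-duration relation.

    Weiss form: I_th(t) = I_rh * (1 + t_chr / t). Multiplying by the
    pulse duration t gives the threshold charge
        Q(t) = I_rh * t + I_rh * t_chr,
    which is linear in t, so a straight-line least-squares fit yields
    the rheobase (slope) and the chronaxie (intercept / slope).
    """
    q = np.asarray(thresholds_uA) * np.asarray(durations_ms)
    slope, intercept = np.polyfit(durations_ms, q, 1)
    return slope, intercept / slope        # (rheobase, chronaxie)

# synthetic SDR with assumed rheobase 5 uA and chronaxie 0.8 ms
t = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])   # pulse durations, ms
i_th = 5.0 * (1.0 + 0.8 / t)                   # thresholds, uA
```

The Lapicque model used in the study, I_th(t) = I_rh / (1 - exp(-t/τ)) with chronaxie τ ln 2, requires a nonlinear fit but yields very similar parameters for well-sampled curves.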
We present a general formulation of Floquet states of periodically time-dependent open Markovian quasifree fermionic many-body systems in terms of a discrete Lyapunov equation. Illustrating the technique, we analyze a periodically kicked XY spin-1/2 chain which is coupled to a pair of Lindblad reservoirs at its ends. A complex phase diagram is reported, with reentrant phases of long-range and exponentially decaying spin-spin correlations as some of the system's parameters are varied. The structure of the phase diagram is reproduced in terms of counting nontrivial stationary points of the Floquet quasiparticle dispersion relation.
Experimental Unconditional Preparation and Detection of a Continuous-Variable Bound Entangled State of Light
(2011)
Among the most intriguing aspects of quantum entanglement is that it comes in free and bound instances. The existence of bound entangled states certifies an intrinsic irreversibility of entanglement in nature and suggests a connection with thermodynamics. In this Letter, we present the first unconditional continuous-variable preparation and detection of a bound entangled state of light. We use convex optimization to identify regimes in which its bound character is well certifiable, and we continuously produce a distributed bound entangled state with an unprecedented significance of more than 10 standard deviations from both separability and distillability. Our results show that the chosen approach allows for the efficient and precise preparation of multimode entangled states of light, with various applications in quantum information, quantum state engineering, and high-precision metrology.
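Bound entangled states are entangled yet remain positive under partial transposition (PPT), which is why their entanglement cannot be distilled. The experiment above works with continuous variables, but the partial-transpose criterion itself is easiest to illustrate in a small discrete system; a minimal sketch on a two-qubit Bell state (which, by contrast, is NPT and hence distillable):

```python
import numpy as np

def partial_transpose(rho, dims=(2, 2)):
    """Partial transpose over the second subsystem of a bipartite density matrix."""
    da, db = dims
    r = rho.reshape(da, db, da, db)           # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(da * db, da * db)  # swap b <-> b'

# Maximally entangled Bell state |phi+> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1.0 / np.sqrt(2.0)
rho_bell = np.outer(phi, phi)

min_eig = np.linalg.eigvalsh(partial_transpose(rho_bell)).min()
print(min_eig)  # a negative eigenvalue certifies entanglement (NPT)
```

A bound entangled state would instead pass this test (no negative eigenvalue) while still failing every separability criterion, which is exactly what makes its certification delicate and motivates the convex-optimization approach of the paper.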
The NFX1-LIKE1 (NFXL1) and NFXL2 genes were identified as regulators of salt stress responses. The NFXL1 protein is a nuclear factor that positively affects adaptation to salt stress. The nfxl1-1 loss-of-function mutant displayed reduced survival rates under salt and high light stress. In contrast, the nfxl2-1 mutant, defective in the NFXL2 gene, and NFXL2-antisense plants exhibited enhanced survival under these conditions. We show here that the loss of NFXL2 function results in abscisic acid (ABA) overaccumulation, reduced stomatal conductance, and enhanced survival under drought stress. The nfxl2-1 mutant displayed reduced stomatal aperture under all conditions tested. Fusicoccin treatment, exposure to increasing light intensities, and supply of decreasing CO2 concentrations demonstrated the full opening capacity of nfxl2-1 stomata. The reduced stomatal opening is presumably a consequence of elevated ABA levels. Furthermore, seedling growth, root growth, and stomatal closure were hypersensitive to exogenous ABA. The enhanced ABA responses may contribute to the improved drought stress resistance of the mutant. Three NFXL2 splice variants were cloned and named NFXL2-78, NFXL2-97, and NFXL2-100 according to the molecular weight of the putative proteins. Translational fusions to the green fluorescent protein suggest nuclear localisation of the NFXL2 proteins. Stable expression of the NFXL2-78 splice variant in nfxl2-1 plants largely complemented the mutant phenotype. Our data show that NFXL2 controls ABA levels and suppresses ABA responses. NFXL2 may prevent unnecessary and costly stress adaptation under favourable conditions.
Development of efficient business process models and determination of their characteristic properties are the subject of intense interdisciplinary research. Here, we consider a business process model as a directed graph. Its nodes correspond to the units identified by the modeler, and the link direction indicates the causal dependencies between units. It is of primary interest to obtain the stationary flow on such a directed graph, which corresponds to the steady state of a firm during the business process. Following the ideas developed recently for the World Wide Web, we construct the Google matrix for our business process model and analyze its spectral properties. The importance of nodes is characterized by PageRank and by the recently proposed CheiRank and 2DRank, respectively. The results show that this two-dimensional ranking provides significant information about the influence and communication properties of business model units. We argue that the Google matrix method described here provides a new, efficient tool that helps companies decide how to evolve in an exceedingly dynamic global market.
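The Google matrix construction follows the WWW case: a column-stochastic link matrix S is damped toward a uniform matrix, G = αS + (1−α)/N, and PageRank is the stationary vector of G. A minimal power-iteration sketch on a toy three-node directed graph (the graph is illustrative, not a business process from the paper):

```python
import numpy as np

def google_matrix(adj, alpha=0.85):
    """Google matrix G = alpha*S + (1-alpha)/N from a directed adjacency matrix.

    Convention: adj[i, j] = 1 means a link j -> i, so columns are normalized.
    Dangling nodes (all-zero columns) are replaced by uniform columns.
    """
    n = adj.shape[0]
    col_sums = adj.sum(axis=0)
    S = np.where(col_sums > 0, adj / np.maximum(col_sums, 1), 1.0 / n)
    return alpha * S + (1.0 - alpha) / n

def pagerank(G, tol=1e-12):
    """Power iteration: repeatedly apply G to a uniform start vector."""
    p = np.full(G.shape[0], 1.0 / G.shape[0])
    while True:
        p_new = G @ p
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# Toy directed graph with links 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
adj = np.array([[0, 0, 1],
                [1, 0, 0],
                [1, 1, 0]], dtype=float)
p = pagerank(google_matrix(adj))
print(p)  # node 2, which collects two in-links, ranks highest
```

CheiRank is obtained the same way after reversing all link directions, which is what makes the ranking two-dimensional.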
Combining the advection-diffusion equation approach with Monte Carlo simulations, we study chaperone-driven polymer translocation of a stiff polymer through a nanopore. We demonstrate that the probability density function of first passage times across the pore depends solely on the Peclet number, a dimensionless parameter comparing drift strength and diffusivity. Moreover, it is shown that the characteristic exponent in the power-law dependence of the translocation time on the chain length, a function of the chaperone-polymer binding energy, the chaperone concentration, and the chain length, is also effectively determined by the Peclet number. We investigate the effect of the chaperone size on the translocation process. In particular, for large chaperone size, the translocation progress and the mean waiting time as a function of the reaction coordinate exhibit pronounced sawtooth shapes. The effects of a heterogeneous polymer sequence on the translocation dynamics are studied in terms of the translocation velocity, the probability distribution for the translocation progress, and the monomer waiting times. (C) 2011 American Institute of Physics.
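The Peclet number in this context is Pe = vL/D, comparing the drift velocity v over the translocation distance L with the diffusivity D. A minimal sketch of a drift-diffusion first-passage simulation of the kind such studies rely on (parameters and boundary conditions are illustrative, not the chaperone model of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(v=1.0, D=1.0, L=10.0, dt=1e-3, n_walkers=500):
    """First-passage times of drift-diffusion walkers from x=0 (reflecting) to x=L (absorbing)."""
    sigma = np.sqrt(2.0 * D * dt)
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        n = int(active.sum())
        x[active] += v * dt + sigma * rng.standard_normal(n)  # Euler step
        x[active] = np.abs(x[active])                         # reflecting wall at the entrance
        t[active] += dt
        active &= x < L                                       # absorb walkers that crossed L
    return t

tau = first_passage_times()
pe = 1.0 * 10.0 / 1.0  # Peclet number Pe = v*L/D = 10: drift-dominated regime
print(f"Pe = {pe}, mean first-passage time = {tau.mean():.2f}")
```

In this drift-dominated regime the mean first-passage time is of order L/v; rescaling times by L/v would collapse distributions with equal Pe, which is the kind of data collapse the abstract refers to.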
Synchrotron-based combined in situ x-ray diffractometry and reflectometry is used to investigate the role of vacancies in the relaxation of residual stress in thin metallic Pt films. From the experimentally determined relative changes of the lattice parameter a and of the film thickness L, the modification of the vacancy concentration and of the residual strain is derived as a function of annealing time at 130 degrees C. The results indicate that relaxation of strain resulting from compressive stress is accompanied by the creation of vacancies at the free film surface. This experimentally confirms the postulated dominant role of vacancies in stress relaxation in thin metal films close to room temperature.
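Comparing macroscopic and lattice dilatation is the classic route to vacancy concentrations: in the isotropic bulk (Simmons-Balluffi) picture, the excess vacancy site fraction is Δc_v = 3(ΔL/L − Δa/a). A minimal numerical sketch with illustrative values (the constrained thin-film geometry of the study modifies the prefactor; the numbers below are not measured values from the paper):

```python
# Illustrative relative changes (not measured values from the study).
dL_over_L = 2.0e-4   # relative change of the film thickness L
da_over_a = 0.5e-4   # relative change of the lattice parameter a

# Isotropic bulk Simmons-Balluffi relation: excess vacancy site fraction.
# Thickness grows faster than the lattice swells => vacancies were created.
delta_cv = 3.0 * (dL_over_L - da_over_a)
print(f"delta c_v = {delta_cv:.2e}")
```

The sign logic is the point: thickness (reflectometry) growing faster than the lattice parameter (diffractometry) signals vacancy creation, consistent with the surface-generated vacancies invoked above.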
An electronic device is suggested that represents a non-autonomous dynamical system with a hyperbolic chaotic attractor of Plykin type in the stroboscopic map, and the results of its simulation with the software package NI MULTISIM are compared with numerical integration of the underlying differential equations. A main practical advantage of electronic devices of this kind is their structural stability, i.e., the insensitivity of the chaotic dynamics with respect to variations of the functions and parameters of the elements constituting the system, as well as to interference and noise.