We review the effects of clumping on the profiles of resonance doublets. By allowing the ratio of the doublet oscillator strengths to be a free parameter, we demonstrate that doublet profiles contain more information than is normally utilized. In clumped (or porous) winds, this ratio can lie between unity and the ratio of the f-values, and can change as a function of velocity and time, depending on the fraction of the stellar disk that is covered by material moving at a particular velocity at a given moment. Using these insights, we present the results of SEI modeling of a sample of B supergiants, ζ Pup and a time series for a star whose terminal velocity is low enough to make the components of its Si IV λλ1400 doublet independent. These results are interpreted within the framework of the Oskinova et al. (2007) model, and demonstrate how the doublet profiles can be used to extract information about wind structure.
We present XMM-Newton Reflection Grating Spectrometer observations of pairs of X-ray emission line profiles from the O star ζ Pup that originate from the same He-like ion. The two profiles in each pair have different shapes and cannot both be consistently fit by models assuming the same wind parameters. We show that the differences in profile shape can be accounted for in a model including the effects of resonance scattering, which affects the resonance line in the pair but not the intercombination line. This implies that resonance scattering is also important in single resonance lines, where its effect is difficult to distinguish from a low effective continuum optical depth in the wind. Thus, resonance scattering may help reconcile X-ray line profile shapes with literature mass-loss rates.
We summarize Chandra observations of the emission line profiles from 17 OB stars. The lines tend to be broad and unshifted. The forbidden/intercombination line ratios arising from Helium-like ions provide radial distance information for the X-ray emission sources, while the H-like to He-like line ratios provide X-ray temperatures, and thus also source temperature versus radius distributions. OB stars usually show power law differential emission measure distributions versus temperature. In models of bow shocks, we find a power law differential emission measure, a wide range of ion stages, and the bow shock flow around the clumps provides transverse velocities comparable to HWHM values. We find that the bow shock results reproduce the line profile properties, consistent with the observations of X-ray line emission for a broad range of OB star properties.
We present one-dimensional, time-dependent models of the clumps generated by the line-deshadowing instability. In order to follow the clumps out to distances of more than 1000 R∗, we use an efficient moving-box technique. We show that, within the approximations, the wind can remain clumped well into the formation region of the radio continuum.
INTEGRAL tripled the number of super-giant high-mass X-ray binaries (sgHMXB) known in the Galaxy by revealing absorbed and fast transient (SFXT) systems. Quantitative constraints on the wind clumping of massive stars can be obtained from the study of the hard X-ray variability of SFXT. A large fraction of the hard X-ray emission is emitted in the form of flares with a typical duration of 3 ksec, a recurrence time of 7 days, and a luminosity of $10^{36}$ erg/s. Such flares are most probably emitted by the interaction of a compact object orbiting at $\sim10~R_*$ with wind clumps ($10^{22 ... 23}$ g) representing a large fraction of the stellar mass-loss rate. The density ratio between the clumps and the inter-clump medium is $10^{2 ... 4}$. The parameters of the clumps and of the inter-clump medium, derived from the SFXT flaring behavior, are in good agreement with the macro-clumping scenario and line-driven instability simulations. SFXT are likely to have larger orbital radii than classical sgHMXB.
Magnetic fields influence the dynamics of hot-star winds and create large scale structure. Based on numerical magnetohydrodynamic (MHD) simulations, we model the wind of θ¹ Ori C, and then use the SEI method to compute synthetic line profiles for a range of viewing angles as function of rotational phase. The resulting dynamic spectrum for a moderately strong line shows a distinct modulation, but with a phase that seems at odds with available observations.
Discussion: X-rays
(2007)
Dynamical simulation of the “velocity-porosity” reduction in observed strength of stellar wind lines
(2007)
I use dynamical simulations of the line-driven instability to examine the potential role of the resulting flow structure in reducing the observed strength of wind absorption lines. Instead of the porosity length formalism used to model effects on continuum absorption, I suggest reductions in line strength can be better characterized in terms of a velocity clumping factor that is insensitive to spatial scales. Examples of dynamic spectra computed directly from instability simulations do exhibit a net reduction in absorption, but only at a modest 10-20% level that is well short of the factor of ca. 10 required by recent analyses of P V lines.
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2013. JWST will find the first stars and galaxies that formed in the early universe, connecting the Big Bang to our own Milky Way galaxy. JWST will peer through dusty clouds to see stars forming planetary systems, connecting the Milky Way to our own Solar System. JWST's instruments are designed to work primarily in the infrared range of 1 - 28 μm, with some capability in the visible range. JWST will have a large mirror, 6.5 m in diameter, and will be diffraction-limited at 2 μm (0.1 arcsec resolution). JWST will be placed in an L2 orbit about 1.5 million km from the Earth. The instruments will provide imaging, coronagraphy, and multi-object and integral-field spectroscopy across the 1 - 28 μm wavelength range. The breakthrough capabilities of JWST will enable new studies of massive star winds from the Milky Way to the early universe.
General Discussion
(2007)
We study the influence of clumping on the predicted wind structure of O-type stars. For this purpose we artificially include clumping into our stationary wind models. When the clumps are assumed to be optically thin, the radiative line force increases compared to corresponding unclumped models, with a similar effect on either the mass-loss rate or the terminal velocity (depending on the onset of clumping). Optically thick clumps, alternatively, might be able to decrease the radiative force.
We present the results of Monte Carlo mass-loss predictions for massive stars covering a wide range of stellar parameters. We critically test our predictions against a range of observed mass-loss rates – in light of the recent discussions on wind clumping. We also present a model to compute the clumping-induced polarimetric variability of hot stars and we compare this with observations of Luminous Blue Variables, for which polarimetric variability is larger than for O and Wolf-Rayet stars. Luminous Blue Variables comprise an ideal testbed for studies of wind clumping and wind geometry, as well as for wind strength calculations, and we propose they may be direct supernova progenitors.
Many hot stars exhibit stochastic polarimetric variability, thought to arise from clumping low in the wind. Here we investigate the wind properties required to reproduce this variability using analytic models, with particular emphasis on Luminous Blue Variables. We find that the winds must be highly structured, consisting of a large number of optically-thin clumps; while we find that the overall level of polarization should scale with mass-loss rate – consistent with observations of LBVs. The models also predict variability on very short timescales, which is supported by the results of a recent polarimetric monitoring campaign.
Overwhelming observational and theoretical evidence suggests that the winds of massive stars are highly clumped. We briefly discuss the influence of clumping on model diagnostics and the difficulties of allowing for the influence of clumping on model spectra. Because of its simplicity, and because of computational ease, most spectroscopic analyses incorporate clumping using the volume filling factor. The biases introduced by this approach are uncertain. To investigate alternative clumping models, and to help determine the validity of parameters derived using the volume filling factor method, we discuss results derived using an alternative model in which we assume that the wind is composed of optically thick shells.
We report FUSE observations in 2005–2006 of three O-type, double-lined spectroscopic binaries in the Magellanic Clouds. The systems have very short periods (1.4–2.25 d), represent rare, young evolutionary stages of massive stars and binaries, and provide a unique glimpse at some of the most massive systems that form in dense clusters of massive stars. Improved orbit parameters, including revised masses, for LH54-425 are derived from new CTIO spectroscopy. The systems are: LH54-425 in the LMC (O3V + O5V, P=2.25d, 62+37M⊙), J053441-693139 in the LMC (O2-3If+O6V, P=1.4 d, 41+27M⊙), and Hodge 53-47 in the SMC (O6V + O4-5IIIf, P=2.2 d, 24+14M⊙, where the O4 star appears to be less massive than the O6 star). Their short periods indicate that wind interaction and mass transfer are likely important factors in their evolution. The spectra provide quantitative and systematic studies of phase-dependent stellar wind properties, wind collision effects in O+O binaries at lower metallicities, improved radial velocity curves, and FUV spectro-photometric changes as a function of orbital phase.
We present preliminary results of a tailored atmosphere analysis of six Galactic WC stars using UV, optical, and mid-infrared Spitzer IRS data. With these data, we are able to sample regions from 10 to 10³ stellar radii, and thus to determine wind clumping in different parts of the wind. Ultimately, derived wind parameters will be used to accurately measure neon abundances, and so test predicted nuclear-reaction rates.
Mass accretion onto compact objects through accretion disks is a common phenomenon in the universe. It is seen in all energy domains from active galactic nuclei through cataclysmic variables (CVs) to young stellar objects. Because CVs are fairly easy to observe, they provide an ideal opportunity to study accretion disks in great detail and thus help us to understand accretion also in other energy ranges. Mass accretion in these objects is often accompanied by mass outflow from the disks. This accretion disk wind, at least in CVs, is thought to be radiatively driven, similar to O star winds. WOMPAT, a 3-D Monte Carlo radiative transfer code for accretion disk winds of CVs is presented.
We apply the 3-dimensional radiative transport code Wind3D to 3D hydrodynamic models of Corotating Interaction Regions to fit the detailed variability of Discrete Absorption Components observed in Si IV UV resonance lines of HD 64760 (B0.5 Ib). We discuss important effects of the hydrodynamic input parameters on these large-scale equatorial wind structures that determine the detailed morphology of the DACs computed with 3D transfer. The best fit model reveals that the CIR in HD 64760 is produced by a source at the base of the wind that lags behind the stellar surface rotation. The non-corotating coherent wind structure is an extended density wave produced by a local increase of only 0.6% in the smooth symmetric wind mass-loss rate.
Clumping in Galactic WN stars: a comparison of mass-loss rates from UV/optical & radio diagnostics
(2007)
The mass-loss rates and other parameters for a large sample of Galactic WN stars have been revised by Hamann et al. (2006), using the most up-to-date Potsdam Wolf-Rayet (PoWR) model atmospheres. For a sub-sample of these stars, measurements of their radio free-free emission exist. After harmonizing the adopted distances and terminal wind velocities, we compare the mass-loss rates obtained from the two diagnostics. The differences are discussed as a possible consequence of different clumping contrast in the line-forming and radio-emitting regions.
Recent studies of massive O-type stars present clear evidence of inhomogeneous and clumped winds. O-type (H-rich) central stars of planetary nebulae (CSPNs) are in some ways the low mass–low luminosity analogues of those massive stars. In this contribution, we present preliminary results of our on-going multi-wavelength (FUV, UV and optical) study of the winds of Galactic CSPNs. Particular emphasis will be given to the clumping factors derived by means of optical lines (Hα and He II 4686) and "classic" FUV (and UV) lines.
We exploit time-series $FUSE$ spectroscopy to {\it uniquely} probe spatial structure and clumping in the fast wind of the central star of the H-rich planetary nebula NGC~6543 (HD~164963). Episodic and recurrent optical depth enhancements are discovered in the P V absorption troughs, with some evidence for a $\sim$ 0.17-day modulation time-scale. The characteristics of these features are essentially identical to the 'discrete absorption components' (DACs) commonly seen in the UV lines of massive OB stars, suggesting the temporal structures seen in NGC~6543 likely have a physical origin that is similar to that operating in massive, luminous stars. The mechanism for forming coherent perturbations in the outflows is therefore apparently operating equally in the radiation-pressure-driven winds of widely differing momenta ($\dot{M} v_\infty R_\star^{0.5}$) and flow times, as represented by OB stars and CSPN.
This paper outlines a newly-developed method to include the effects of time variability in the radiative transfer code CMFGEN. It is shown that the flow timescale is often large compared to the variability timescale of LBVs. Thus, time-dependent effects significantly change the velocity law and density structure of the wind, affecting the derivation of the mass-loss rate, volume filling factor, wind terminal velocity, and luminosity. The results of this work are directly applicable to all active LBVs in the Galaxy and in the LMC, such as AG Car, HR Car, S Dor and R 127, and could result in a revision of stellar and wind parameters. The mass-loss rate evolution of AG Car during the last 20 years is presented, highlighting the need for time-dependent models to correctly interpret the evolution of LBVs.
We discuss the results of time-resolved spectroscopy of three presumably single Population I Wolf-Rayet stars in the Small Magellanic Cloud, where the ambient metallicity is $\sim 1/5 Z_\odot$. We were able to detect and follow numerous small-scale wind-embedded inhomogeneities in all observed stars. The general properties of the moving features, such as their velocity dispersions, emissivities and average accelerations, closely match the corresponding characteristics of small-scale inhomogeneities in the winds of Galactic Wolf-Rayet stars.
The influence of the wind on the total continuum of OB supergiants is discussed. For wind velocity distributions with β > 1.0, the wind can have a strong influence on the total continuum emission, even at optical wavelengths. Comparing the continuum emission of clumped and unclumped winds, especially for stars with high β values, reveals flux differences of up to 30%, with a maximum in the near-IR. Continuum observations at these wavelengths are therefore an ideal tool to discriminate between clumped and unclumped winds of OB supergiants.
Massive stars usually form groups such as OB associations. Their fast stellar winds collectively sweep up the surrounding interstellar medium (ISM) to generate superbubbles. Observations suggest that the evolution of superbubbles in the surrounding ISM can be very irregular. Numerical simulations considering these conditions could help to understand the evolution of these superbubbles and to clarify the dynamics of these objects, as well as the difference between the observed X-ray luminosities and those predicted by the standard model (Weaver et al. 1977).
We present the latest results on the observational dependence of the mass-loss rate in stellar winds of O and early-B stars on the metal content of their atmospheres, and compare these with predictions. Absolute empirical rates for the mass loss of stars brighter than 10$^{5.2} L_{\odot}$, based on H$\alpha$ and ultraviolet (UV) wind lines, are found to be about a factor of two higher than predictions. If this difference is attributed to inhomogeneities in the wind this would imply that luminous O and early-B stars have clumping factors in their H$\alpha$ and UV line forming regime of about a factor of 3--5. The investigated stars cover a metallicity range $Z$ from 0.2 to 1 $Z_{\odot}$. We find a hint towards smaller clumping factors for lower $Z$. The derived clumping factors, however, presuppose that clumping does not impact the predictions of the mass-loss rate. We discuss this assumption and explain how we intend to investigate its validity in more detail.
We report on new mass-loss rate estimates for O stars in six massive binaries using the amplitude of orbital-phase dependent, linear-polarimetric variability caused by scattering off free electrons in the winds. Our estimated mass-loss rates for luminous O stars are independent of clumping. They suggest similar clumping corrections as for WR stars and do not support the recently proposed reduction in mass-loss rates of O stars by one or two orders of magnitude.
Clumping in O-star winds
(2007)
We have analyzed the spectra of seven Galactic O4 supergiants with the NLTE wind code CMFGEN. For all stars, we have found that clumped wind models match lines from different species well, spanning a wavelength range from the FUV to the optical, and remain consistent with Hα data. We have achieved an excellent match of the P V λλ1118, 1128 resonance doublet and N IV λ1718, as well as He II λ4686, suggesting that our physical description of clumping is adequate. We find very small volume filling factors and that clumping starts deep in the wind, near the sonic point. The most crucial consequence of our analysis is that the mass-loss rates of O stars need to be revised downward significantly, by a factor of 3 or more compared to those obtained from smooth-wind models.
I discuss observational evidence – independent of the direct spectral diagnostics of stellar winds themselves – suggesting that mass-loss rates for O stars need to be revised downward by roughly a factor of three or more, in line with recent observed mass-loss rates for clumped winds. These independent constraints include the large observed mass-loss rates in LBV eruptions, the large masses of evolved massive stars like LBVs and WNH stars, WR stars in lower metallicity environments, observed rotation rates of massive stars at different metallicity, supernovae that seem to defy expectations of high mass-loss rates in stellar evolution, and other clues. I pay particular attention to the role of feedback that would result from higher mass-loss rates, driving the star to the Eddington limit too soon, and therefore making higher rates appear highly implausible. Some of these arguments by themselves may have more than one interpretation, but together they paint a consistent picture that steady line-driven winds of O-type stars have lower mass-loss rates and are significantly clumped.
The P V λλ1118, 1128 resonance doublet is an extraordinarily useful diagnostic of O-star winds, because it bypasses the traditional problems associated with determining mass-loss rates from UV resonance lines. We discuss critically the assumptions and uncertainties involved with using P V to diagnose mass-loss rates, and conclude that the large discrepancies between mass-loss rates determined from P V and the rates determined from "density squared" emission processes pose a significant challenge to the "standard model" of hot-star winds. The disparate measurements can be reconciled if the winds of O-type stars are strongly clumped on small spatial scales, which in turn implies that mass-loss rates based on Hα or radio emission are too large by up to an order of magnitude.
Significant seasonal variation in size at settlement has been observed in newly settled larvae of Dreissena polymorpha in Lake Constance. Diet quality, which varies temporally and spatially in freshwater habitats, has been suggested as a significant factor influencing life history and development of freshwater invertebrates. Accordingly, experiments were conducted with field-collected larvae to test the hypothesis that diet quality can determine planktonic larval growth rates, size at settlement and subsequent post-metamorphic growth rates. Larvae were fed one of two diets or starved. One diet was composed of cyanobacterial cells, which are deficient in polyunsaturated fatty acids (PUFAs), and the other was a mixed diet rich in PUFAs. Freshly metamorphosed animals from the starvation treatment had a carbon content per individual 70% lower than that of larvae fed the mixed diet. This apparent exhaustion of larval internal reserves resulted in a 50% reduction of the post-metamorphic growth rates. Growth was also reduced in animals previously fed the cyanobacterial diet. Hence, low food quantity or low food quality during the larval stage of D. polymorpha leads to irreversible effects for post-metamorphic animals, and is related to inferior competitive abilities.
In the old days (pre ∼1990) hot stellar winds were assumed to be smooth, which made life fairly easy and bothered no one. Then after suspicious behaviour had been revealed, e.g. stochastic temporal variability in broadband polarimetry of single hot stars, it took the emerging CCD technology developed in the preceding decades (∼1970-80's) to reveal that these winds were far from smooth. It was mainly high-S/N, time-dependent spectroscopy of strong optical recombination emission lines in WR, and also a few OB and other stars with strong hot winds, that indicated that all hot stellar winds are likely pervaded by thousands of multiscale (compressible supersonic turbulent?) structures, whose driver is probably some kind of radiative instability. Quantitative estimates of clumping-independent mass-loss rates came from various fronts, mainly dependent directly on density (e.g. electron-scattering wings of emission lines, UV spectroscopy of weak resonance lines, and binary-star properties including orbital-period changes, electron-scattering, and X-ray fluxes from colliding winds) rather than the more common, easier-to-obtain but clumping-dependent density-squared diagnostics (e.g. free-free emission in the IR/radio and recombination lines, of which the favourite has always been Hα). Many big questions still remain, such as: What do the clumps really look like? Do clumping properties change as one recedes from the mother star? Is clumping universal? Does the relative clumping correction depend on $\dot{M}$ itself?
Mass loss is a very important aspect of the life of massive stars. After briefly reviewing its importance, we discuss the impact of the recently proposed downward revision of mass loss rates due to clumping (difficulty in forming Wolf-Rayet stars and production of critically rotating stars). Although a small reduction might be allowed, large reduction factors around ten are disfavoured. We then discuss the possibility of significant mass loss at very low metallicity due to stars reaching break-up velocities and especially due to the metal enrichment of the surface of the star via rotational and convective mixing. This significant mass loss may help the first very massive stars avoid the fate of pair-creation supernova, the chemical signature of which is not observed in extremely metal poor stars. The chemical composition of the very low metallicity winds is very similar to that of the most metal poor star known to date, HE 1327-2326, and offers an interesting explanation for the origin of the metals in this star. We also discuss the importance of mass loss in the context of long and soft gamma-ray bursts and pair-creation supernovae. Finally, we would like to stress that mass loss in the cooler parts of the HR diagram (luminous blue variable and yellow and red supergiant stages) is much more uncertain than in the hot part. More work needs to be done in these areas to better constrain the evolution of the most massive stars.
The factors that determine the efficiency of energy transfer in aquatic food webs have been investigated for many decades. The plant-animal interface is the most variable and least predictable of all levels in the food web. In order to study determinants of food quality in a large lake and to test the recently proposed central importance of the long-chained eicosapentaenoic acid (EPA) at the pelagic producer-grazer interface, we tested the importance of polyunsaturated fatty acids (PUFAs) at the pelagic producer-consumer interface by correlating sestonic food parameters with somatic growth rates of a clone of Daphnia galeata. Daphnia growth rates were obtained from standardized laboratory experiments spanning one season with Daphnia feeding on natural seston from Lake Constance, a large pre-alpine lake. Somatic growth rates were fitted to sestonic parameters by using a saturation function. A moderate amount of variation was explained when the model included the elemental parameters carbon (r² = 0.6) and nitrogen (r² = 0.71). A tighter fit was obtained when sestonic phosphorus was incorporated (r² = 0.86). The nonlinear regression with EPA was relatively weak (r² = 0.77), whereas the highest degree of variance was explained by three C18-PUFAs. The best (r² = 0.95), and only significant, correlation of Daphnia's growth was found with the C18-PUFA α-linolenic acid (α-LA; C18:3n-3). This correlation was weakest in late August when C:P values increased to 300, suggesting that mineral and PUFA limitation of Daphnia's growth changed seasonally. Sestonic phosphorus and some PUFAs showed tight correlations not only with growth, but also with sestonic α-LA content. We computed Monte Carlo simulations to test whether the observed effects of α-LA on growth could be accounted for by EPA, phosphorus, or one of the two C18-PUFAs, stearidonic acid (C18:4n-3) and linoleic acid (C18:2n-6).
With >99 % probability, the correlation of growth with α-LA could not be explained by any of these parameters. In order to test for EPA limitation of Daphnia's growth, in parallel with experiments on pure seston, growth was determined on seston supplemented with chemostat-grown, P-limited Stephanodiscus hantzschii, which is rich in EPA. Although supplementation increased the EPA content 80-800x, no significant changes in the nonlinear regression of the growth rates with α-LA were found, indicating that growth of Daphnia on pure seston was not EPA limited. This indicates that the two fatty acids, EPA and α-LA, were not mutually substitutable biochemical resources and points to different physiological functions of these two PUFAs. These results support the PUFA-limitation hypothesis for sestonic C:P < 300 but are contrary to the hypothesis of a general importance of EPA, since no evidence for EPA limitation was found. It is suggested that the resource ratios of EPA and α-LA rather than the absolute concentrations determine which of the two resources is limiting growth.
4-Phenylphenoxazinones were isolated after biomimetic oxidation, using diphenoloxidases of insect cuticle, mushroom tyrosinase, or after autoxidation of N-acetyldopamine in the presence of β-alanine, β-alanine methyl ester or N-acetyl-L-lysine. They are formed presumably by addition of 2-aminoalkyl-5-alkylphenols to the o-quinone of biphenyltetrol which, in turn, arises from oxidative coupling. The structures present the first examples of the assembly of reasonably stable intermediates in the rather complex process of chemical modifications of aliphatic amino acid residues by o-quinones.
Amphiphilic derivatives of octadiene and docosadiene were investigated in monolayers and Langmuir-Blodgett multilayers, with respect to their self-organization and their polymerization behavior. All amphiphiles investigated form monolayers. However, only acid and alcohol derivatives were able to build up multilayers. Those multilayers are rapidly photopolymerized in the layers via a two-step process: Irradiation with long-wavelength UV light yields soluble polymers, whereas additional irradiation with short-wavelength UV light produces insoluble and presumably cross-linked polymers. The reaction mechanism is discussed according to the polymer characterization by UV spectroscopy, small-angle X-ray scattering, NMR spectroscopy, and gel permeation chromatography. All multilayers undergo structural changes during the polymerization; substantial changes result in defects in the polymerized layers as observed by scanning electron microscopy. In contrast to the acids and alcohols, the deposition of monolayers of the aldehyde derivatives did not yield well-ordered multilayers, but rather amorphous films. In this different film structure, the photopolymerization process differs from the one observed in multilayers.
Stellar winds play an important role for the evolution of massive stars and their cosmic environment. Multiple lines of evidence, coming from spectroscopy, polarimetry, variability, stellar ejecta, and hydrodynamic modeling, suggest that stellar winds are non-stationary and inhomogeneous. This is referred to as 'wind clumping'. The urgent need to understand this phenomenon is boosted by its far-reaching implications. Most importantly, all techniques to derive empirical mass-loss rates are more or less corrupted by wind clumping. Consequently, mass-loss rates are extremely uncertain. Within their range of uncertainty, completely different scenarios for the evolution of massive stars are obtained. Settling these questions for Galactic OB, LBV and Wolf-Rayet stars is prerequisite to understanding stellar clusters and galaxies, or predicting the properties of first-generation stars. In order to develop a consistent picture and understanding of clumped stellar winds, an international workshop on 'Clumping in Hot Star Winds' was held in Potsdam, Germany, from 18 to 22 June 2007. About 60 participants, comprising almost all leading experts in the field, gathered for one week of extensive exchange and discussion. The Scientific Organizing Committee (SOC) included John Brown (Glasgow), Joseph Cassinelli (Madison), Paul Crowther (Sheffield), Alex Fullerton (Baltimore), Wolf-Rainer Hamann (Potsdam, chair), Anthony Moffat (Montreal), Stan Owocki (Newark), and Joachim Puls (Munich). These proceedings contain the invited and contributed talks presented at the workshop, and document the extensive discussions.
The topography of first-order catchments in a region of western Amazonia was found to exhibit distinctive, recurrent features: a steep, straight lower side slope, a flat or nearly flat terrace at an intermediate elevation between valley floor and interfluve, and an upper side slope connecting interfluve and intermediate terrace. A detailed survey of soil saturated hydraulic conductivity (Ksat)–depth relationships, involving 740 undisturbed soil cores, was conducted in a 0.75-ha first-order catchment. The sampling approach was stratified with respect to the above slope units. Exploratory data analysis suggested fourth-root transformation of batches from the 0–0.1 m depth interval, log transformation of batches from the subsequent 0.1 m depth increments, and the use of robust estimators of location and scale. The Ksat of the steep lower side slope decreased from 46 to 0.1 mm/h over the overall sampling depth of 0.4 m. The corresponding decrease was from 46 to 0.1 mm/h on the intermediate terrace, from 335 to 0.01 mm/h on the upper side slope, and from 550 to 0.015 mm/h on the interfluve. A depthwise comparison of these slope units led to the formulation of several hypotheses concerning the link between Ksat and topography.
Rainfall erosivities as defined by the R factor from the universal soil loss equation were determined for all events during a two-year period at the station La Cuenca in western Amazonia. Three methods based on a power relationship between rainfall amount and erosivity were then applied to estimate event and daily rainfall erosivities from the respective rainfall amounts. A test of the resulting regression equations against an independent data set proved all three methods equally adequate in predicting rainfall erosivity from daily rainfall amount. We recommend the Richardson model for testing in the Amazon Basin, and its use with the coefficient from La Cuenca in western Amazonia.
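All three estimation methods compared in this abstract rest on a power relationship between rainfall amount and erosivity. A minimal sketch of that idea, with purely illustrative coefficients (the values actually fitted at La Cuenca are not given here and are not reproduced):

```python
import math

# Hedged sketch: a Richardson-type model relates daily rainfall amount P (mm)
# to rainfall erosivity EI via a power law, EI = a * P**b. The coefficients
# below are hypothetical placeholders, NOT the fitted La Cuenca values.
A_COEF = 0.2  # hypothetical site coefficient
B_EXP = 1.5   # hypothetical exponent

def daily_erosivity(p_mm: float) -> float:
    """Estimate daily rainfall erosivity from daily rainfall amount."""
    if p_mm <= 0:
        return 0.0
    return A_COEF * p_mm ** B_EXP

def fit_power_law(rain, erosivity):
    """Fit (a, b) by ordinary least squares on log(EI) = log(a) + b*log(P)."""
    pairs = [(math.log(p), math.log(e))
             for p, e in zip(rain, erosivity) if p > 0 and e > 0]
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    log_a = (sy - b * sx) / n
    return math.exp(log_a), b
```

Fitting in log-log space is the standard way to estimate the two parameters of such a relation from event records, after which the regression can be tested against an independent data set as described above.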
Previous hydrometric studies demonstrated the prevalence of overland flow as a hydrological pathway in the tropical rain forest catchment of South Creek, northeast Queensland. The purpose of this study was to use this information in a mixing analysis aimed at identifying the sources of storm flow and estimating their contributions during two events in February 1993. K and acid-neutralizing capacity (ANC) were used as tracers because they provided the best separation of the potential sources (saturation overland flow; soil water from depths of 0.3, 0.6, and 1.2 m; and hillslope groundwater) in a two-dimensional mixing plot. It was necessary to distinguish between saturation overland flow, generated at the soil surface and following unchanneled pathways, and overland flow in incised pathways. This latter type of overland flow was a mixture of saturation overland flow (event water) with high concentrations of K and a low ANC, soil water (preevent water) with low concentrations of K and a low ANC, and groundwater (preevent water) with low concentrations of K and a high ANC. The same sources explained the streamwater chemistry during the two events with strongly differing rainfall and antecedent moisture conditions. The contribution of saturation overland flow dominated the storm flow during the first, high-intensity, 178-mm event, while the contribution of soil water reached 50% during peak flow of the second, low-intensity, 44-mm event 5 days later. This latter result is remarkably similar to soil water contributions to storm flow in mountainous forested catchments of the southeastern United States. In terms of event and preevent water, the storm flow hydrograph of the high-intensity event is dominated by event water and that of the low-intensity event by preevent water.
This study highlights the problems of applying mixing analyses to overland flow-dominated catchments and soil environments with a poorly developed vertical chemical zonation and emphasizes the need for independent hydrometric information for a complete characterization of watershed hydrology and chemistry.
Chemical fingerprints of hydrological compartments and flow paths at La Cuenca, western Amazonia
(1995)
A forested first-order catchment in western Amazonia was monitored for 2 years to determine the chemical fingerprints of precipitation, throughfall, overland flow, pipe flow, soil water, groundwater, and streamflow. We used five tracers (hydrogen, calcium, magnesium, potassium, and silica) to distinguish “fast” flow paths mainly influenced by the biological subsystem from “slow” flow paths in the geochemical subsystem. The former comprise throughfall, overland flow, and pipe flow and are characterized by a high potassium/silica ratio; the latter are represented by soil water and groundwater, which have a low potassium/silica ratio. Soil water and groundwater differ with respect to calcium and magnesium. The groundwater-controlled streamflow chemistry is strongly modified by contributions from fast flow paths during precipitation events. The high potassium/silica ratio of these flow paths suggests that the storm flow response at La Cuenca is dominated by event water.
Earlier investigations at South Creek in northeastern Queensland established the importance of overland flow as a hydrologic pathway in this tropical rainforest environment. Since this pathway is ‘fast’, transmitting presumably ‘new’ water, its importance should be reflected in the stormflow chemistry of South Creek: the greater the volumetric contribution to the stormflow hydrograph, the more similarity between the chemical composition of streamwater and of overland flow is to be expected. Water samples were taken during two storm events in an ephemeral gully (gully A), an intermittent gully (gully B) and at the South Creek catchment outlet; additional spot checks were made in several poorly defined rills. The chemical composition of ‘old’ water was determined from 45 baseflow samples collected throughout February. The two events differed considerably in their magnitudes, intensities and antecedent moisture conditions. In both events, the stormflow chemistry in South Creek was characterized by a sharp decrease in Ca, Mg, Na, Si, Cl, EC, ANC, alkalinity and total inorganic carbon. pH remained nearly constant with discharge, whereas K increased sharply, as did sulfate in an ill-defined manner. In event 1, this South Creek stormflow pattern was closely matched by the pattern in gully A, implying a dominant contribution of ‘new’ water. This match was confirmed by the spot samples from rills. Gully B behaved like South Creek itself, but with a dampened ‘new’ water signal, indicating less overland flow generation in its subcatchment. In event 2, which occurred five days later, the initial ‘new’ water signal in gully A was rapidly overwhelmed by a different signal which is attributed to rapid drainage from a perched water table. This study shows that stormflow in this rainforest catchment consists predominantly of ‘new’ water which reaches the stream channel via ‘fast’ pathways. 
Where the ephemeral gullies delivering overland flow are incised deeply enough to intersect a perched water table, a delayed, ‘old’ water-like signal may be transmitted.
Just and Carpenter (1980) presented a theory of reading based on eye fixations wherein their "psycholinguistic" variables accounted for 72% of the variance in word gaze durations. This comment raises some statistical and theoretical problems with their use of simultaneous regression analysis of gaze duration measures and with the resulting theory of reading. A major problem was the confounding of perceptual with psycholinguistic factors. New eye fixation data are presented to support these criticisms. Analysis of fixations within words revealed that most gaze duration variance was contributed by number of fixations rather than by fixation duration.
The complement fragments C3a and C5a were purified from zymosan-activated human serum by column chromatographic procedures after the bulk of the proteins had been removed by acidic polyethylene glycol precipitation. In the isolated in situ perfused rat liver C3a increased glucose and lactate output and reduced flow. Its effects were enhanced in the presence of the carboxypeptidase inhibitor DL-mercaptomethyl-3-guanidinoethylthio-propanoic acid (MERGETPA) and abolished by preincubation of the anaphylatoxin with carboxypeptidase B or with Fab fragments of an anti-C3a monoclonal antibody. The C3a effects were partially inhibited by the thromboxane antagonist BM13505. C5a had no effect. It is concluded that locally but not systemically produced C3a may play an important role in the regulation of local metabolism and hemodynamics during inflammatory processes in the liver.
In the isolated rat liver perfused in situ, stimulation of the nerve bundles around the portal vein and the hepatic artery caused an increase of urate formation that was inhibited by the α1-blocker prazosin and the xanthine oxidase inhibitor allopurinol. Moreover, nerve stimulation increased glucose and lactate output and decreased perfusion flow. Infusion of noradrenaline had similar effects. Compared to nerve stimulation, infusion of glucagon led to a less pronounced increase of urate formation and a twice as large increase in glucose output but a decrease in lactate release, without affecting the flow rate. Insulin had no effect on any of the parameters studied.
Increase in prostanoid formation in rat liver macrophages (Kupffer cells) by human anaphylatoxin C3a
(1993)
Human anaphylatoxin C3a increases glycogenolysis in perfused rat liver. This action is inhibited by prostanoid synthesis inhibitors and prostanoid antagonists. Because prostanoids but not anaphylatoxin C3a can increase glycogenolysis in hepatocytes, it has been proposed that prostanoid formation in nonparenchymal cells represents an important step in the C3a-dependent increase in hepatic glycogenolysis. This study shows that (a) human anaphylatoxin C3a (0.1 to 10 μg/ml) dose-dependently increased prostaglandin D2, thromboxane B2, and prostaglandin F2α formation in rat liver macrophages (Kupffer cells); (b) the C3a-mediated increase in prostanoid formation was maximal after 2 min and showed tachyphylaxis; and (c) the C3a-elicited prostanoid formation could be inhibited specifically by preincubation of C3a with carboxypeptidase B to remove the essential C-terminal arginine or by preincubation of C3a with Fab fragments of a neutralizing monoclonal antibody. These data support the hypothesis that the C3a-dependent activation of hepatic glycogenolysis is mediated by way of a C3a-induced prostanoid production in Kupffer cells.
The effect of moderate rates of nitrogen deposition on ground floor vegetation is poorly predicted by uncontrolled surveys or by fertilization experiments using high rates of nitrogen (N) addition. We compared the temporal trends of ground floor vegetation in permanent plots with moderate (7–13 kg ha−1 year−1) and lower bulk N deposition (4–6 kg ha−1 year−1) in southern Sweden during 1982–1998. We examined whether trends differed between growth forms (vascular plants and bryophytes) and vegetation types (three types of coniferous forest, deciduous forest, and bog). Trends of site-standardized cover and richness varied among growth forms, vegetation types, and deposition regions. Cover of vascular plants in spruce forests decreased at the same rate with both moderate and low deposition; in pine forests it decreased faster with moderate deposition, and in bogs it decreased faster with low deposition. Cover of bryophytes in spruce forests increased at the same rate with both moderate and low deposition; in pine forests it decreased faster with moderate deposition, and in bogs and deciduous forests there was a strong non-linear increase with moderate deposition. The number of vascular plant species remained constant with moderate deposition and decreased with low deposition. We found no trend in the number of bryophyte species. We propose that the decrease of cover and species number with low deposition was related to normal ecosystem development (increased shading), suggesting that N deposition maintained or increased the competitiveness of some species in the moderate-deposition region. Deposition had no consistent negative effect on vegetation, suggesting that it is less important than normal successional processes.
Today about 24 million people worldwide suffer from dementia; Alzheimer’s Disease accounts for approximately 50-60% of all dementia cases. As the prevalence of dementia grows with increasing age, Alzheimer’s Disease becomes more and more of an issue for society as the proportion of elderly people increases from year to year. It is well established that the amino acid glutamate - quantitatively the most important neurotransmitter in the central nervous system (CNS) - may reach toxic concentrations if not cleared from the synaptic cleft into which it is released during transmission of action potentials. In Alzheimer’s Disease there is strong evidence for a generally impaired glutamate uptake system, which in turn is thought to result in toxic levels of the amino acid with the potential to kill off neurons. The excitatory amino acid transporter 1 (EAAT1) belongs to the family of Na+-dependent glutamate transporters and accounts, together with EAAT2, for most of the glutamate uptake in the CNS. In this project a new splice variant of EAAT1, skipping exon 3, was detected in human brain samples and subsequently called EAAT1Δ3; this is the second splice variant found after the recent detection of EAAT1Δ9. A method was developed to quantify the transcripts of EAAT1 wt, EAAT1Δ3 and EAAT1Δ9 by means of real-time PCR. Samples were taken from different brain areas of a set of control and AD cases. The areas chosen for examination are affected differently in Alzheimer’s Disease; this served as an internal control for the experiments done in this project, to determine whether any observed effect is specific for AD, i.e. for AD-affected areas, or is seen generally in all areas examined. The results of this project show that EAAT1Δ3 is transcribed in very low copy numbers, making up a proportion of 0.15% of EAAT1 wt, whereas EAAT1Δ9 is transcribed in a considerably larger proportion of 26.6% of EAAT1 wt. 
It was moreover found that all EAAT1 variants are transcribed at significantly lower rates (P<0.0001) in AD cases, supporting the theory that EAAT1 protein expression is reduced to a point where glutamate uptake normally mediated by this transporter is impaired. This in turn is thought to result in toxic levels of glutamate, accounting for neuronal loss in the disease. No area-dependent effects were found, suggesting that the reduction of EAAT1 transcription is rather the result of an underlying general mechanism present in AD. Further research will have to be done to assess the degree of EAAT1 expression in AD and whether those future findings match the results of this project.
Recent research has shown that the early lexical representations children establish in their second year of life already seem to be phonologically detailed enough to allow differentiation from very similar forms. In contrast to these findings, children with specific language impairment show problems in discriminating phonologically similar word forms up to school age. In our study we investigated whether there are differences in the processing of phonological details between normally developing children and children with low language performance in the second year of life. This was done in a retrospective study in which the processing of phonological details was tested in a preferential looking experiment when the children were 19 months old. At the age of 30 months the children were tested with a standardized German test of language comprehension and production (SETK2). The preferential looking data at 19 months revealed opposite reaction patterns for the two groups: while the children scoring normally in the SETK2 increased their fixations of a pictured object only when it was named with the correct word, children with later low language performance did so only when presented with a phonologically slightly deviant mispronunciation. We suggest that this pattern does not point to a specific deficit in processing phonological information in these children but might be related to an instability of early phonological representations and/or a generalized problem of information processing as compared to typically developing children.
Recent work has shown that English-learning 18-month-olds can detect the relationship between discontinuous morphemes such as is and -ing in Grandma is always running (Gomez, 2002; Santelmann & Jusczyk, 1998) but only at a maximum of 3 intervening syllables. In this article we examine the tracking of discontinuous dependencies in children acquiring German. Due to freer word order, German allows for greater distances between dependent elements and a greater syntactic variety of the intervening elements than English does. The aim of this study was to investigate whether factors other than distance may influence the child’s capacity to recognize discontinuous elements. Our findings provide evidence that children’s recognition capacities are affected not only by distance but also by their ability to linguistically analyze the material intervening between the dependent elements. We speculate that this result supports the existence of processing mechanisms that reduce a discontinuous relation to a local one based on subcategorization relations.
How do children determine the syntactic category of novel words? In this article we present the results of 2 experiments that investigated whether German children between 12 and 16 months of age can use distributional knowledge that determiners precede nouns and subject pronouns precede verbs to syntactically categorize adjacent novel words. Evidence from the head-turn preference paradigm shows that, although 12- to 13-month-olds cannot do this, 14- to 16-month-olds are able to use a determiner to categorize a following novel word as a noun. In contrast, no categorization effect was found for a novel word following a subject pronoun. To understand this difference we analyzed adult child-directed speech. This analysis showed that there are in fact stronger co-occurrence relations between determiners and nouns than between subject pronouns and verbs. Thus, in German determiners may be more reliable cues to the syntactic category of an adjacent novel word than are subject pronouns. We propose that the capacity to syntactically categorize novel words, demonstrated here for the first time in children this young, mediates between the recognition of the specific morphosyntactic frame in which a novel word appears and the word-to-world mapping that is needed to build up a semantic representation for the novel word.
The Arctic plays a key role in Earth’s climate system, as global warming is predicted to be most pronounced at high latitudes and because one third of the global carbon pool is stored in ecosystems of the northern latitudes. In order to improve our understanding of the present and future carbon dynamics in climate-sensitive permafrost ecosystems, the present study concentrates on investigations of microbial controls of methane fluxes, on the activity and structure of the involved microbial communities, and on their response to changing environmental conditions. For this purpose an integrated research strategy was applied, which connects trace gas flux measurements to soil ecological characterisation of permafrost habitats and molecular ecological analyses of microbial populations. Furthermore, methanogenic archaea isolated from Siberian permafrost have been used as potential keystone organisms for studying and assessing life under extreme living conditions. Long-term studies on methane fluxes have been carried out since 1998. These studies revealed considerable seasonal and spatial variations of methane emissions for the different landscape units, ranging from 0 to 362 mg m-2 d-1. For the overall balance of methane emissions from the entire delta, the first land cover classification based on Landsat images was performed and applied for an upscaling of the methane flux data sets. The regionally weighted mean daily methane emissions of the Lena Delta (10 mg m-2 d-1) are only one fifth of the values calculated for other Arctic tundra environments. The calculated annual methane emission of the Lena Delta amounts to about 0.03 Tg. The low methane emission rates obtained in this study are the result of the remotely sensed high-resolution data basis used, which provides a more realistic estimate of the actual methane emissions on a regional scale. Soil temperature and atmospheric turbulence near the soil surface were identified as the driving parameters of methane emissions. 
A flux model based on these variables explained the variations of the methane budget reasonably well, corresponding to continuous processes of microbial methane production and oxidation and gas diffusion through soil and plants. The results show that the Lena Delta contributes significantly to the global methane balance because of its extensive wetland areas. The microbiological investigations showed that permafrost soils are colonized by high numbers of microorganisms. The total biomass is comparable to that of temperate soil ecosystems. Activities of methanogens and methanotrophs differed significantly in their rates and distribution patterns, both along the vertical profiles and between the different investigated soils. The methane production rates varied between 0.3 and 38.9 nmol h-1 g-1, while the methane oxidation rates ranged from 0.2 to 7.0 nmol h-1 g-1. Phylogenetic analyses of methanogenic communities revealed a distinct diversity of methanogens affiliated with Methanomicrobiaceae, Methanosarcinaceae and Methanosaetaceae, which partly form four specific permafrost clusters. The results demonstrate the close relationship between methane fluxes and the fundamental microbiological processes in permafrost soils. The microorganisms not only survive in their extreme habitat but can also be metabolically active under in situ conditions. It was shown that a slight increase in temperature can lead to a substantial increase in methanogenic activity within perennially frozen deposits. In the case of permafrost degradation, this would lead to an extensive release of methane from these deposits, with subsequent impacts on the total methane budget. Further studies on the stress response of methanogenic archaea, especially Methanosarcina SMA-21, isolated from Siberian permafrost, revealed an unexpected resistance of these microorganisms to unfavourable living conditions. A better adaptation to environmental stress was observed at 4 °C than at 28 °C. 
For the first time it could be demonstrated that methanogenic archaea from terrestrial permafrost even survived simulated Martian conditions. The results show that permafrost methanogens are more resistant than methanogens from non-permafrost environments under Mars-like climate conditions. Microorganisms comparable to methanogens from terrestrial permafrost can be seen as one of the most likely candidates for life on Mars due to their physiological potential and metabolic specificity.
Many methods have been proposed for the simulation of constrained mechanical systems. The most obvious of these have mild instabilities and drift problems. Consequently, stabilization techniques have been proposed. A popular stabilization method is Baumgarte's technique, but the choice of parameters to make it robust has been unclear in practice. Some of the simulation methods that have been proposed and used in computations are reviewed here from a stability point of view. This involves the concepts of differential-algebraic equation (DAE) and ordinary differential equation (ODE) invariants. An explanation of the difficulties that may be encountered using Baumgarte's method is given, and a discussion of why a further quest for better parameter values for this method will always remain frustrating is presented. It is then shown how Baumgarte's method can be improved. An efficient stabilization technique is proposed, which may employ explicit ODE solvers in the case of nonstiff or highly oscillatory problems and which relates to coordinate projection methods. Examples of a two-link planar robotic arm and a squeezing mechanism illustrate the effectiveness of this new stabilization method.
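As a concrete illustration of Baumgarte's technique, here is a minimal sketch on a textbook planar pendulum rather than the paper's robot-arm or squeezing-mechanism examples. The constraint g(q) = 0 is differentiated twice, and the resulting g'' = 0 is replaced by g'' + 2αg' + β²g = 0 so that constraint drift is damped; the values of ALPHA and BETA below are arbitrary, which is exactly the parameter-choice difficulty the abstract discusses:

```python
import math

# Planar pendulum of length L as a constrained particle (unit mass):
# constraint g(x, y) = x^2 + y^2 - L^2 = 0. Baumgarte replaces g'' = 0 by
#   g'' + 2*ALPHA*g' + BETA**2 * g = 0.
L, GRAV = 1.0, 9.81
ALPHA, BETA = 5.0, 5.0  # arbitrary stabilization parameters

def step(state, dt):
    """One explicit Euler step of the Baumgarte-stabilized equations."""
    x, y, vx, vy = state
    g = x * x + y * y - L * L
    gdot = 2.0 * (x * vx + y * vy)
    # With ax = -2*lam*x and ay = -GRAV - 2*lam*y, the stabilized equation
    # g'' + 2*ALPHA*gdot + BETA^2*g = 0 yields the Lagrange multiplier lam:
    rhs = 2.0 * (vx * vx + vy * vy) - 2.0 * y * GRAV \
          + 2.0 * ALPHA * gdot + BETA ** 2 * g
    lam = rhs / (4.0 * (x * x + y * y))
    ax, ay = -2.0 * lam * x, -GRAV - 2.0 * lam * y
    return (x + dt * vx, y + dt * vy, vx + dt * ax, vy + dt * ay)

state = (L, 0.0, 0.0, 0.0)  # released from the horizontal, at rest
for _ in range(2000):
    state = step(state, 1e-3)
drift = abs(state[0] ** 2 + state[1] ** 2 - L * L)  # constraint violation
```

With ALPHA = BETA = 0 the same integrator drifts off the constraint manifold; the damping terms keep the violation bounded, but how large it stays depends on the unprincipled choice of the two parameters.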
A Hamiltonian system in potential form (formula in the original abstract) subject to smooth constraints on q can be viewed as a Hamiltonian system on a manifold, but numerical computations must be performed in Rn. In this paper methods which reduce "Hamiltonian differential algebraic equations" to ODEs in Euclidean space are examined. The authors study the construction of canonical parameterizations or local charts as well as methods based on the construction of ODE systems in the space in which the constraint manifold is embedded which preserve the constraint manifold as an invariant manifold. In each case, a Hamiltonian system of ordinary differential equations is produced. The stability of the constraint invariants and the behavior of the original Hamiltonian along solutions are investigated both numerically and analytically.
Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice. Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible. The best methods thus obtained are related to methods of coordinate projection. We discuss them and make concrete algorithmic suggestions.
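The coordinate-projection methods that the best stabilizations above relate to can be sketched in a few lines: after each unconstrained ODE step, the position is orthogonally projected back onto the constraint manifold g(q) = 0 and the velocity onto its tangent space. A hypothetical example for the circle constraint g(x, y) = x^2 + y^2 - L^2 = 0 (not one of the paper's test problems):

```python
import math

# Orthogonal projection onto the circle constraint g(x, y) = x^2 + y^2 - L^2 = 0
# and onto its tangent space; applied after every ODE step.
L = 1.0

def project(x, y, vx, vy):
    """Project (position, velocity) back onto the constraint manifold."""
    r = math.hypot(x, y)
    x, y = L * x / r, L * y / r           # closest point on the circle
    t = (x * vy - y * vx) / L             # tangential velocity component
    return x, y, -t * y / L, t * x / L    # velocity tangent to the manifold
```

For a general g(q) the position projection is a small nonlinear least-squares solve (e.g. a Newton iteration on g), and the velocity projection solves a linear system with the constraint Jacobian; the circle case above just makes both steps explicit.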
As living systems unable to adjust their location to changing environmental conditions, plants display homeostatic networks that have evolved to maintain transition metal levels within a very narrow concentration range in order to avoid either deficiency or toxicity. Hence, plants possess a broad repertoire of mechanisms for the cellular uptake, compartmentation and efflux, as well as for the chelation, of transition metal ions. A small number of plants are hypertolerant to one or a few specific transition metals. Some metal-tolerant plants are also able to hyperaccumulate metal ions. The Brassicaceae family member Arabidopsis halleri ssp. halleri (L.) O´KANE and AL´SHEHBAZ is a hyperaccumulator of zinc (Zn), and it is closely related to the non-hypertolerant and non-hyperaccumulating model plant Arabidopsis thaliana (L.) HEYNHOLD. This close relationship renders A. halleri a promising emerging model plant for the comparative investigation of the molecular mechanisms behind hypertolerance and hyperaccumulation. Among several candidate genes that are probably involved in mediating the zinc-hypertolerant and zinc-hyperaccumulating trait is AhHMA3. The AhHMA3 gene is highly similar to AtHMA3 (AGI number: At4g30120) in A. thaliana, and its encoded protein belongs to the P-type IB ATPase family of integral membrane transporter proteins that transport transition metals. In contrast to the low AtHMA3 transcript levels in A. thaliana, the gene was found to be constitutively highly expressed across different Zn treatments in A. halleri, especially in shoots. In this study, the cloning and characterisation of the HMA3 gene and its promoter from Arabidopsis halleri (L.) O´KANE and AL´SHEHBAZ and Arabidopsis thaliana (L.) HEYNHOLD is described. Heterologously expressed AhHMA3 mediated enhanced tolerance to Zn and, to a much lesser degree, to cadmium (Cd), but not to cobalt (Co), in metal-sensitive mutant strains of budding yeast. It is demonstrated that the genome of A. 
halleri contains at least four copies of AhHMA3, AhHMA3-1 to AhHMA3-4. A copy-specific real-time RT-PCR indicated that an AhHMA3-1 related gene copy is the source of the constitutively high transcript level in A. halleri, and not a gene copy similar to AhHMA3-2 or AhHMA3-4. In accordance with the enhanced AtHMA3 mRNA transcript level in A. thaliana roots, an AtHMA3 promoter-GUS gene construct mediated GUS activity predominantly in the vascular tissues of roots and not in shoots. However, the observed AhHMA3-1 and AhHMA3-2 promoter-mediated GUS activity in A. thaliana or A. halleri plants did not reflect the constitutively high expression of AhHMA3 in shoots of A. halleri. It is suggested that other factors, e.g. characteristic sequence inserts within the first intron of AhHMA3-1, might enable a constitutively high expression. Moreover, the unknown promoter of the AhHMA3-3 gene copy could be the source of the constitutively high AhHMA3 transcript levels in A. halleri. In that case, the AhHMA3-3 sequence is predicted to be highly homologous to AhHMA3-1. The lack of solid localisation data for the AhHMA3 protein prevents a clear functional assignment. The data provided suggest several possible functions of the AhHMA3 protein: like AtHMA2 and AtHMA4 it might be localised to the plasma membrane and could contribute to the efficient translocation of Zn from root to shoot and/or to the cell-to-cell distribution of Zn in the shoot. If localised to the vacuolar membrane, a role in maintaining a low cytoplasmic zinc concentration by vacuolar zinc sequestration is possible. In addition, AhHMA3 might be involved in the delivery of zinc ions to trichomes and mesophyll leaf cells, which are major zinc storage sites in A. halleri.
During the last few years there has been a tremendous growth of scientific activity in fields related to both physics and control theory: nonlinear dynamics, micro- and nanotechnologies, self-organization and complexity, etc. New horizons were opened and new exciting applications emerged. Experts with different backgrounds who are starting to work together need more opportunities for information exchange to improve mutual understanding and cooperation. The Conference "Physics and Control 2007" is the third international conference focusing on the borderland between physics and control, with emphasis on both theory and applications. With its 2007 venue in Potsdam, Germany, the conference is located for the first time outside of Russia. The major goal of the Conference is to bring together researchers from different scientific communities and to gain some general and unified perspectives in the studies of controlled systems in physics, engineering, chemistry, biology and other natural sciences. We hope that the Conference helps experts in control theory to get acquainted with new interesting problems, and helps experts in physics and related fields to learn more about ideas and tools from modern control theory.
The aim of this work was the generation of carbon materials with high surface area, exhibiting a hierarchical pore system in the macro- and mesorange. Such a pore system facilitates transport through the material and enhances the interaction with the carbon matrix (macropores are pores with diameters > 50 nm, mesopores between 2 and 50 nm). To this end, new strategies for the synthesis of novel carbon materials with designed porosity were developed that are in particular useful for the storage of energy. Besides the porosity, it is the graphene structure itself that determines the properties of a carbon material. Non-graphitic carbon materials usually exhibit a quite large degree of disorder with many defects in the graphene structure, and thus exhibit inherent microporosity (d < 2 nm). These pores act as traps and oppose reversible interaction with the carbon matrix. Furthermore, they reduce the stability and conductivity of the carbon material, which was undesired for the proposed applications. As one part of this work, the graphene structures of different non-graphitic carbon materials were studied in detail using a novel wide-angle X-ray scattering model that provided precise information about the nature of the carbon building units (graphene stacks). Different carbon precursors were evaluated regarding their potential use for the syntheses shown in this work, whereby mesophase pitch proved advantageous when a less disordered carbon microstructure is desired. By using mesophase pitch as carbon precursor, two templating strategies were developed using the nanocasting approach. The synthesized (monolithic) materials combined for the first time the advantages of a hierarchical interconnected pore system in the macro- and mesorange with the advantages of mesophase pitch as carbon precursor. In the first case, hierarchical macro-/mesoporous carbon monoliths were synthesized by replication of hard (silica) templates. 
Thus, a suitable synthesis procedure was developed that allowed the infiltration of the template with the hardly soluble carbon precursor. In the second case, hierarchical macro-/mesoporous carbon materials were synthesized by a novel soft-templating technique, taking advantage of the phase separation (spinodal decomposition) between mesophase pitch and polystyrene. The synthesis also allowed the generation of monolithic samples and the incorporation of functional nanoparticles into the material. The synthesized materials showed excellent properties as an anode material in lithium batteries and as a support material for supercapacitors.
A numerical bifurcation analysis of the electrically driven plane sheet pinch is presented. The electrical conductivity varies across the sheet such as to allow instability of the quiescent basic state at some critical Hartmann number. The most unstable perturbation is the two-dimensional tearing mode. Restricting the whole problem to two spatial dimensions, this mode is followed up to a time-asymptotic steady state, which proves to be sensitive to three-dimensional perturbations even close to the point where the primary instability sets in. A comprehensive three-dimensional stability analysis of the two-dimensional steady tearing-mode state is performed by varying parameters of the sheet pinch. The instability with respect to three-dimensional perturbations is suppressed by a sufficiently strong magnetic field in the invariant direction of the equilibrium. For a special choice of the system parameters, the unstably perturbed state is followed up in its nonlinear evolution and is found to approach a three-dimensional steady state.
We investigate numerically the appearance of heteroclinic behavior in a three-dimensional, buoyancy-driven fluid layer with stress-free top and bottom boundaries, a square horizontal periodicity with a small aspect ratio, and rotation at low to moderate rates about a vertical axis. The Prandtl number is 6.8. If the rotation is not too slow, the skewed-varicose instability leads from stationary rolls to a stationary mixed-mode solution, which in turn loses stability to a heteroclinic cycle formed by unstable roll states and connections between them. The unstable eigenvectors of these roll states are also of the skewed-varicose or mixed-mode type, and in some parameter regions skewed-varicose-like shearing oscillations as well as square patterns are involved in the cycle. Weak noise, which is always present, leads to irregular horizontal translations of the convection pattern and makes the dynamics chaotic, which is verified by calculating Lyapunov exponents. In the nonrotating case, the primary rolls lose stability, depending on the aspect ratio, to traveling waves or a stationary square pattern. We also study the symmetries of the solutions at the intermittent fixed points in the heteroclinic cycle.
Our dynamic Sun manifests its activity in different phenomena: from the 11-year cyclic sunspot pattern to the unpredictable and violent explosions in the case of solar flares. During flares, a huge amount of the stored magnetic energy is suddenly released, and a substantial part of this energy is carried by energetic electrons, considered to be the source of the nonthermal radio and X-ray radiation. One of the most important and still open questions in solar physics is how the electrons are accelerated up to high energies within the short time scales observed in the radio emission. Because the acceleration site is also extremely small in spatial extent (compared to the solar radius), the electron acceleration is regarded as a local process. The search for localized wave structures in the solar corona that are able to accelerate electrons, together with the theoretical and numerical description of the conditions and requirements for this process, is the aim of the dissertation. Two models of electron acceleration in the solar corona are proposed in the dissertation: I. Electron acceleration due to the solar jet interaction with the background coronal plasma (the jet--plasma interaction). A jet is formed when the newly reconnected and highly curved magnetic field lines relax by shooting plasma away from the reconnection site. Such jets, as observed in soft X-rays with the Yohkoh satellite, are spatially and temporally associated with beams of nonthermal electrons (in terms of the so-called type III metric radio bursts) propagating through the corona. A model that attempts to give an explanation for these observational facts is developed here. Initially, the interaction of such jets with the background plasma leads to an (ion-acoustic) instability associated with electrostatic fluctuations growing in time for a certain range of the jet's initial velocity.
During this process, any test electron that happens to feel this electrostatic wave field is drawn to co-move with the wave, gaining energy from it. When the jet speed is greater or lower than that required by the instability range, such wave excitation cannot be sustained and the process of electron energization (acceleration and/or heating) ceases. Hence, the electrons can propagate further in the corona and be detected as a type III radio burst, for example. II. Electron acceleration due to attached whistler waves in the upstream region of coronal shocks (the electron--whistler--shock interaction). Coronal shocks are also able to accelerate electrons, as observed via the so-called type II metric radio bursts (the radio signature of a shock wave in the corona). From in-situ observations in space, e.g., at shocks related to co-rotating interaction regions, it is known that nonthermal electrons are produced preferentially at shocks with attached whistler wave packets in their upstream regions. Motivated by these observations, and assuming that the physical processes at shocks are the same in the corona as in the interplanetary medium, a new model of electron acceleration at coronal shocks is presented in the dissertation, in which the electrons are accelerated by their interaction with such whistlers. The protons inflowing toward the shock are reflected there, nearly conserving their magnetic moment, so that they gain a substantial velocity in the case of a quasi-perpendicular shock geometry, i.e., the angle between the shock normal and the upstream magnetic field is in the range 50--80 degrees. The so-accelerated protons are able to excite whistler waves in a certain frequency range in the upstream region. When these whistlers (comprising the localized wave structure in this case) are formed, only the incoming electrons are now able to interact resonantly with them.
But only a part of these electrons fulfills the electron--whistler wave resonance condition. Due to this resonant interaction (i.e., of these electrons with the whistlers), the electrons are accelerated in the electric and magnetic wave field within just a few whistler periods. While gaining energy from the whistler wave field, the electrons reach the shock front and, subsequently, a major part of them is reflected back into the upstream region, since the shock, accompanied by a jump of the magnetic field, acts as a magnetic mirror. Co-moving with the whistlers now, the reflected electrons are out of resonance and hence can propagate undisturbed into the far upstream region, where they are detected in terms of type II metric radio bursts. In summary, in both cases the kinetic energy of protons is transferred to electrons by the action of localized wave structures, i.e., at jets outflowing from the magnetic reconnection site and at shock waves in the corona.
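The electron--whistler resonance condition invoked above is not spelled out in the abstract; for reference, the standard textbook form of the cyclotron resonance condition between an electron and a parallel-propagating whistler wave is

```latex
\omega - k_\parallel v_\parallel = \frac{n\,\Omega_e}{\gamma},
\qquad \Omega_e = \frac{e B}{m_e}, \qquad n \in \mathbb{Z},
```

where ω and k∥ are the wave frequency and parallel wavenumber, v∥ the electron velocity along the magnetic field, Ω_e the electron gyrofrequency and γ the Lorentz factor. Which harmonic n is relevant depends on the wave and particle geometry assumed in the specific model.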
In this thesis we mainly generalize two theorems from Mackaay-Picken and Picken (2002, 2004). In the first paper, Mackaay and Picken show that there is a bijective correspondence between Deligne 2-classes $\xi \in \check{H}^2(M,\mathcal{D}^2)$ and holonomy maps from the second thin-homotopy group $\pi_2^2(M)$ to $U(1)$. In the second one, a generalization of this theorem to manifolds with boundaries is given: Picken shows that there is a bijection between Deligne 2-cocycles and a certain variant of 2-dimensional topological quantum field theories. In this thesis we show that these two theorems hold in every dimension. We consider first the holonomy case, and by using simplicial methods we can prove that the group of smooth Deligne $d$-classes is isomorphic to the group of smooth holonomy maps from the $d^{th}$ thin-homotopy group $\pi_d^d(M)$ to $U(1)$, if $M$ is $(d-1)$-connected. We contrast this with a result of Gajer (1999). Gajer showed that Deligne $d$-classes can be reconstructed by a different class of holonomy maps, which include holonomies not only along spheres but also along general $d$-manifolds in $M$. This approach does not require the manifold $M$ to be $(d-1)$-connected. We show that in the case of flat Deligne $d$-classes, our result differs from Gajer's if $M$ is not $(d-1)$-connected but only $(d-2)$-connected. Stiefel manifolds have this property, and if one applies our theorem to these and compares the result with that of Gajer's theorem, it is revealed that our theorem reconstructs too many Deligne classes. This means that our reconstruction theorem cannot do without the extra assumption on the manifold $M$; that is, our reconstruction needs less information about the holonomy of $d$-manifolds in $M$ at the price of assuming $M$ to be $(d-1)$-connected.
We go on to show that the second theorem can also be generalized: by introducing the concept of a Picken-type topological quantum field theory in arbitrary dimensions, we can show that every Deligne $d$-cocycle induces such a $d$-dimensional field theory with two special properties, namely thin-invariance and smoothness. We show that any $d$-dimensional topological quantum field theory with these two properties gives rise to a Deligne $d$-cocycle and verify that this construction is surjective and injective, that is, that both groups are isomorphic.
The Voyager 2 Photopolarimeter experiment has yielded the highest resolved data of Saturn's rings, exhibiting a wide variety of features. The B-ring region between 105000 km and 110000 km distance from Saturn has been investigated. It has a high matter density and contains no significant features visible to the eye. Analysis with statistical methods has led us to the detection of two significant events. These features are correlated with the inner 3:2 resonances of the F-ring shepherd satellites Pandora and Prometheus, and may be evidence of large ring particles caught in the corotation resonances.
It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortage, or, at the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random, and describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function. Detection of the latter is important with respect to, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem which is somewhat different from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e. comparing the most suitable long-range dependent and the most suitable short-range dependent model.
Approaching the task this way requires a) a suitable class of long-range and short-range dependent models along with suitable means for parameter estimation and b) a reliable model selection strategy, capable of discriminating also between non-nested models. With the flexible FARIMA model class together with the Whittle estimator, the first requirement is fulfilled. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic. Its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of p-value and power estimates obtained from the simulated distributions. The result turned out to depend on the model parameters. However, in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly leads to a delay of measures that need to be taken to counteract the trend. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis.
A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications. The increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on and development of methods for the detection of long-range dependence.
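The simulation-based model selection procedure described above can be illustrated with a minimal sketch. Instead of FARIMA models, two simple non-nested candidates with closed-form maximum-likelihood fits (exponential versus lognormal) stand in for the two competing model classes; the likelihood ratio is the test statistic, and its null distribution is obtained by refitting both models to data simulated from the fitted null model. All model choices here are illustrative, not those of the thesis.

```python
import numpy as np

def loglik_exp(x):
    # MLE of the exponential rate is 1/mean; plug it into the log-likelihood
    lam = 1.0 / x.mean()
    return len(x) * np.log(lam) - lam * x.sum()

def loglik_lognorm(x):
    # MLEs of the lognormal parameters are the mean and std of log(x)
    logx = np.log(x)
    mu, sigma = logx.mean(), logx.std()
    n = len(x)
    return (-n * np.log(sigma * np.sqrt(2 * np.pi))
            - logx.sum()
            - ((logx - mu) ** 2).sum() / (2 * sigma ** 2))

def lr_statistic(x):
    # likelihood ratio (log scale) between the two non-nested candidates
    return loglik_exp(x) - loglik_lognorm(x)

def simulated_pvalue(x, n_sim=500, rng=None):
    """P-value of the observed LR under the exponential null, obtained by
    refitting both models to data simulated from the fitted null model."""
    rng = np.random.default_rng(rng)
    scale = x.mean()  # fitted null model
    obs = lr_statistic(x)
    null_lrs = np.array([lr_statistic(rng.exponential(scale, size=len(x)))
                         for _ in range(n_sim)])
    # one-sided: small LR values favour the lognormal alternative
    return float((null_lrs <= obs).mean())

rng = np.random.default_rng(42)
x = rng.exponential(2.0, size=400)   # data generated from the null model
p = simulated_pvalue(x, n_sim=200, rng=1)
print(round(p, 3))
```

As in the thesis' procedure, the same machinery run on data simulated from each candidate in turn also yields an estimate of the test's power, and reveals directly whether the two models can be discriminated at all for the given sample size.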
In the modern industrialized countries every year several hundred thousand people die due to sudden cardiac death. The individual risk for this sudden cardiac death cannot be defined precisely by commonly available, non-invasive diagnostic tools like Holter monitoring, highly amplified ECG and traditional linear analysis of heart rate variability (HRV). Therefore, we apply some rather unconventional methods of nonlinear dynamics to analyse the HRV. Especially, some complexity measures that are based on symbolic dynamics, as well as a new measure, the renormalized entropy, detect some abnormalities in the HRV of several patients who have been classified in the low risk group by traditional methods. A combination of these complexity measures with the parameters in the frequency domain seems to be a promising way to get a more precise definition of the individual risk. These findings have to be validated by a representative number of patients.
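As a rough illustration of the symbolic-dynamics idea mentioned above, one can map beat-to-beat (RR) intervals to a small alphabet and take the Shannon entropy of short symbol words as a complexity measure. The alphabet, threshold and word length below are illustrative choices, not the exact measures used in the study.

```python
import numpy as np
from collections import Counter

def symbolize(rr, alpha=0.05):
    """Map each RR interval to a symbol:
    0 = close to the mean, 1 = above, 2 = below (threshold alpha * mean)."""
    mu = rr.mean()
    sym = np.zeros(len(rr), dtype=int)
    sym[rr > (1 + alpha) * mu] = 1
    sym[rr < (1 - alpha) * mu] = 2
    return sym

def word_entropy(sym, k=3):
    """Shannon entropy (bits) of the distribution of words of length k."""
    words = [tuple(sym[i:i + k]) for i in range(len(sym) - k + 1)]
    counts = np.array(list(Counter(words).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
regular = 800 + 10 * rng.standard_normal(1000)    # low variability (ms)
variable = 800 + 120 * rng.standard_normal(1000)  # high variability (ms)
print(word_entropy(symbolize(regular)) < word_entropy(symbolize(variable)))
```

A pathologically rigid heart rhythm concentrates the word distribution on a few words and yields low entropy, which is the kind of abnormality such measures can flag even when linear HRV parameters look inconspicuous.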
We have used techniques of nonlinear dynamics to compare a special model for the reversals of the Earth's magnetic field with the observational data. Although this model is rather simple, there is no essential difference to the data by means of well-known characteristics, such as correlation function and probability distribution. Applying methods of symbolic dynamics we have found that the considered model is not able to describe the dynamical properties of the observed process. These significant differences are expressed by algorithmic complexity and Renyi information.
An approach to the development of fluorescent probes to follow polymerizations in situ using fluorinated cross-conjugated enediynes (Y-enynes) is reported. Different substitution patterns in the Y-enynes result in distinct solvatochromic behavior. β,β-Bis(phenylethynyl)pentafluorostyrene 7, which bears no donor substituents and only fluorine at the styrene moiety, shows no solvatochromism. Donor-substituted β,β-bis(3,4,5-trimethoxyphenylethynyl)pentafluorostyrene 8 and β,β-bis(4-butyl-2,3,5,6-tetrafluorophenylethynyl)-3,4,5-trimethoxystyrene 9 exhibit solvatochromism upon change of solvent polarity. Y-enyne 8 showed the largest solvatochromic shift (94 nm bathochromic shift) upon changing solvent from cyclohexane to acetonitrile. A smaller solvatochromic response (44 nm bathochromic shift) was observed for 9. Lippert–Mataga treatment of 8 and 9 yields slopes of −10,800 and −6,400 cm⁻¹, respectively. This corresponds to a change in dipole moment of 9.6 and 6.9 D, respectively. The solvatochromic behavior of 8 and 9 supports the formation of an intramolecular charge transfer (ICT) state. The low fluorescence quantum yields are caused by competitive double bond rotation. The fluorescence decay time of 9 decreases in methyltetrahydrofuran from 2.1 ns at 77 K to 0.11 ns at 200 K. Efficient single bond rotation in 9 was frozen at −50 °C in a configuration in which the trimethoxyphenyl ring is perpendicular to the fluorinated rings. 7–9 are photostable compounds. The X-ray structure of 7 shows it is not planar and that its conjugation is distorted. Y-enyne 7 stacks in the solid state, showing coulombic, acetylene–arene, and fluorine–π interactions.
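The Lippert–Mataga treatment mentioned above relates the Stokes shift to the solvent orientation polarizability; in its standard textbook form (with the Onsager cavity radius a as a model parameter),

```latex
\bar{\nu}_{\mathrm{abs}} - \bar{\nu}_{\mathrm{em}}
  = \frac{2\,(\Delta\mu)^2}{h c\, a^3}\,\Delta f + \mathrm{const},
\qquad
\Delta f = \frac{\varepsilon - 1}{2\varepsilon + 1} - \frac{n^2 - 1}{2 n^2 + 1},
```

where ε and n are the solvent dielectric constant and refractive index. The slope of a plot against Δf thus yields the change in dipole moment Δμ between ground and excited state once a cavity radius is assumed; the reported values of 9.6 and 6.9 D follow from the measured slopes under such an assumption.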
Investigations with frequency domain photon density waves allow elucidation of absorption and scattering properties of turbid media. The temporal and spatial propagation of intensity modulated light with frequencies up to more than 1 GHz can be described by the P1 approximation to the Boltzmann transport equation. In this study, we establish requirements for the appropriate choice of turbid model media and characterize mixtures of isosulfan blue as absorber and polystyrene beads as scatterer. For these model media, the independent determination of absorption and reduced scattering coefficients over large absorber and scatterer concentration ranges is demonstrated with a frequency domain photon density wave spectrometer employing intensity and phase measurements at various modulation frequencies.
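For reference, the frequency-domain behaviour described above follows, in the diffusion (P1) limit for an infinite homogeneous medium and an assumed e^{−iωt} time dependence, from

```latex
\nabla^2 \Phi + k^2 \Phi = -\frac{S}{D},
\qquad
k^2 = \frac{-\mu_a + i\,\omega/c}{D},
\qquad
D = \frac{1}{3\,(\mu_a + \mu_s')},
```

whose point-source solution is a damped spherical wave Φ_AC(r) ∝ e^{ikr}/r. Fitting the measured amplitude attenuation and phase shift as functions of distance and modulation frequency then determines the absorption coefficient μ_a and the reduced scattering coefficient μ_s' independently; sign and normalization conventions vary between texts.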
I perform and analyse the first ever calculations of rotating stellar iron core collapse in {3+1} general relativity that start out with presupernova models from stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from collapse, bounce and the early postbounce phase of core collapse supernovae. I supplement my {3+1} GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of accretion-induced collapse of white dwarfs to neutron stars.
The innovation of information techniques has changed many aspects of our life. In the health care field, we can obtain, manage and communicate high-quality large volumetric image data with computer-integrated devices to support medical care. In this dissertation I propose several promising methods that could assist physicians in processing, observing and communicating the image data. They fall into my three research areas: telemedicine integration, medical image visualization and image segmentation. These methods are also demonstrated by the demo software that I developed. One of my research points focuses on medical information storage standards in telemedicine, for example DICOM, which is the predominant standard for the storage and communication of medical images. I propose a novel 3D image data storage method, which was lacking in the current DICOM standard. I also created a mechanism to make use of non-standard or private DICOM files. In this thesis I present several rendering techniques for medical image visualization to offer different display manners, both 2D and 3D, for example, cutting through the data volume at an arbitrary angle, rendering the surface shell of the data, and rendering the semi-transparent volume of the data. A hybrid segmentation approach, designed for semi-automated segmentation of radiological images, such as CT, MRI, etc., is proposed in this thesis to extract the organ or area of interest from the image. This approach takes advantage of both region-based and boundary-based methods. Three steps compose the hybrid approach: the first step obtains a coarse segmentation by fuzzy affinity and generates a homogeneity operator; the second step divides the image by a Voronoi diagram and reclassifies the regions with the operator to refine the segmentation from the previous step; the third step handles vague boundaries with a level set model.
Topics for future research are mentioned at the end, including a new supplement to the DICOM standard for segmentation information storage, visualization of multimodal image information, and improvement of the segmentation approach to higher dimensions.
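The first step of the hybrid approach, coarse segmentation by fuzzy affinity, can be sketched as seeded region growing in which a pixel joins the region when its affinity to the seed intensity exceeds a threshold. The Gaussian affinity function, the threshold and the 2D setting below are illustrative simplifications of the method described in the thesis.

```python
import numpy as np
from collections import deque

def fuzzy_region_grow(img, seed, sigma=10.0, tau=0.5):
    """Grow a region from `seed`: a pixel is accepted if its Gaussian
    affinity to the seed intensity, exp(-(I - I_seed)^2 / (2 sigma^2)),
    exceeds the threshold tau. 4-connected, 2D for simplicity."""
    h, w = img.shape
    i0 = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                affinity = np.exp(-(float(img[ny, nx]) - i0) ** 2
                                  / (2 * sigma ** 2))
                if affinity >= tau:
                    mask[ny, nx] = True
                    q.append((ny, nx))
    return mask

# bright square "organ" on a dark background
img = np.full((40, 40), 20.0)
img[10:30, 10:30] = 100.0
mask = fuzzy_region_grow(img, (20, 20))
print(mask.sum())  # → 400: exactly the 20x20 bright region
```

In the full pipeline, the statistics of the region found this way would feed the homogeneity operator used to reclassify the Voronoi regions in the second step.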
First studies of electron transfer in [N]phenylenes were performed in bimolecular quenching reactions of angular [3]- and triangular [4]phenylene with various electron acceptors. The relation between the quenching rate constants kq and the free energy change of the electron transfer (ΔG0CS) could be described by the Rehm-Weller equation. From the experimental results, a reorganization energy λ of 0.7 eV was derived. Intramolecular electron transfer reactions were studied in an [N]phenylene bichromophore and a corresponding reference compound. Fluorescence lifetime and quantum yield of the bichromophore display a characteristic dependence on the solvent polarity, whereas the corresponding values of the reference compound remain constant. From the results, a nearly isoergonic ΔG0CS can be determined. As the triplet quantum yield is nearly independent of the polarity, charge recombination leads to the population of the triplet state.
Contents:
Chapter 1. Introduction
 1 Information Structure
 2 Grammatical Correlates of Information Structure
 3 Structure of the Questionnaire
 4 Experimental Tasks
 5 Technicalities
 6 Archiving
 7 Acknowledgments
Chapter 2. General Questions
 1 General Information
 2 Phonology
 3 Morphology and Syntax
Chapter 3. Experimental tasks
 1 Changes (Given/New in Intransitives and Transitives)
 2 Giving (Given/New in Ditransitives)
 3 Visibility (Given/New, Animacy and Type/Token Reference)
 4 Locations (Given/New in Locative Expressions)
 5 Sequences (Given/New/Contrast in Transitives)
 6 Dynamic Localization (Given/New in Dynamic Loc. Descriptions)
 7 Birthday Party (Weight and Discourse Status)
 8 Static Localization (Macro-Planning and Given/New in Locatives)
 9 Guiding (Presentational Utterances)
 10 Event Cards (All New)
 11 Anima (Focus types and Animacy)
 12 Contrast (Contrast in pairing events)
 13 Animal Game (Broad/Narrow Focus in NP)
 14 Properties (Focus on Property and Possessor)
 15 Eventives (Thetic and Categorical Utterances)
 16 Tell a Story (Contrast in Text)
 17 Focus Cards (Selective, Restrictive, Additive, Rejective Focus)
 18 Who does What (Answers to Multiple Constituent Questions)
 19 Fairy Tale (Topic and Focus in Coherent Discourse)
 20 Map Task (Contrastive and Selective Focus in Spontaneous Dialogue)
 21 Drama (Contrastive Focus in Argumentation)
 22 Events in Places (Spatial, Temporal and Complex Topics)
 23 Path Descriptions (Topic Change in Narrative)
 24 Groups (Partial Topic)
 25 Connections (Bridging Topic)
 26 Indirect (Implicational Topic)
 27 Surprises (Subject-Topic Interrelation)
 28 Doing (Action Given, Action Topic)
 29 Influences (Question Priming)
Chapter 4. Translation tasks
 1 Basic Intonational Properties
 2 Focus Translation
 3 Topic Translation
 4 Quantifiers
Chapter 5. Information structure summary survey
 1 Preliminaries
 2 Syntax
 3 Morphology
 4 Prosody
 5 Summary: Information structure
Chapter 6. Performance of Experimental Tasks in the Field
 1 Field sessions
 2 Field Session Metadata
 3 Informants’ Agreement
The end of the cold war division of the Baltic Sea in 1989, and the three Baltic states’ return to independence in 1991 created new opportunities for the decision-makers of the area, as well as new possibilities for fashioning security in the region. This article will examine the security debate affecting the Baltic Sea region in the post-cold war period, and in particular, the relevance of the European Union to that debate. The following section will examine various concepts of security relevant to the Baltic region; the third section looks at the EU and the Baltic area; and the last part deals with the implications that EU membership by the Baltic Sea states may have for the security of the Baltic Sea zone.
The article mobilises the concept of strategic culture in order to identify the impact of history upon contemporary security policy. The article will first look at the "wholesale construction" of a strategic culture after the Second World War in West Germany before exploring its impact upon security policy since the end of the Cold War in two areas: the Bundeswehr's out-of-area role and conscription. The central argument presented here is that the strategic culture of the former Federal Republic, now writ large onto the new united Germany, sets the context within which security policies are designed. This strategic culture, as will be argued, acts as both a facilitating and a restraining variable on behaviour, making certain policy options possible and others impossible.
Contents:
1. Introduction
2. Migration and Assimilation – Theoretical Approaches
 2.1 Meaning and Definition of the Terms Migration and Migrant
 2.2 Milton M. Gordon – Sub Processes of Assimilation
 2.3 Hartmut Esser – Acculturation, Integration, and Assimilation
 2.4 The Concept of Integration and Assimilation
 2.5 Straight-line Assimilation and its Implications
 2.6 Segmented Assimilation and its Implications
3. Social Inequality and Welfare – Theoretical Approaches
 3.1 Dimensions of Inequality
 3.2 Welfare Regimes and Social Inequality
 3.3 Migration, Assimilation and Inequality
4. Research Design
 4.1 Research Question and General Proceeding
 4.2 Sample and Data Base
 4.3 Operationalisation and Indicators
5. Migration, Welfare and Inequality in Three European Countries
6. Empirical Results
 6.1 Performance of Migrants Compared With Natives
 6.2 Different Trajectories of Assimilation
 6.3 Trajectories of Segmented Assimilation and their Determinants
 6.4 Policies, Attitudes and Assimilation – An Aggregate Analysis
 6.5 Summary – What Determines the Performance of Migrants?
7. Discussion of Empirical Results in Terms of Theoretical Approaches
 7.1 The Situation of Migrants in Three European Countries
 7.2 Assessment of the Trajectories of Assimilation
8. Conclusion – Future Prospects of Migration in Europe
Observers of international politics have been conscious of the growing international involvement of non-central governments (NCGs), particularly in federal systems. These have been supplemented by the internationalisation of subnational actors in quasi-federal and even unitary states. One of the difficulties is that analysis has often been locked into the dominant paradigm debate in International Relations concerning who are and who are not significant actors. Having briefly explored the nature of this changing environment, marked by a growing emphasis on access rather than control as a policy objective and the emergence of what is termed a 'catalytic diplomacy', the discussion focuses on the need for linkage between the levels of government in the pursuit of international as well as domestic policy goals. The nature of the linkage mechanisms is discussed.
One type of internal diachronic change that has been extensively studied for spoken languages is grammaticalization, whereby lexical elements develop into free or bound grammatical elements. Based on a wealth of spoken languages, a large number of prototypical grammaticalization pathways have been identified. Moreover, it has been shown that desemanticization, decategorialization, and phonetic erosion are typical characteristics of grammaticalization processes. Not surprisingly, grammaticalization is also responsible for diachronic change in sign languages. Drawing data from a fair number of sign languages, we show that grammaticalization in visual-gestural languages – as far as the development from lexical to grammatical element is concerned – follows the same developmental pathways as in spoken languages. That is, the proposed pathways are modality-independent. Besides these intriguing parallels, however, sign languages have the possibility of developing grammatical markers from manual and non-manual co-speech gestures. We will discuss various instances of grammaticalized gestures and we will also briefly address the issue of the modality-specificity of this phenomenon.
On the basis of the Dynamic Syntax framework, this paper argues that the production pressures in dialogue determining alignment effects and given versus new informational effects also drive the shift from case-rich free word order systems without clitic pronouns into systems with clitic pronouns with rigid relative ordering. The paper introduces assumptions of Dynamic Syntax, in particular the building up of interpretation through structural underspecification and update, sketches the attendant account of production with close coordination of parsing and production strategies, and shows how what was at the Latin stage a purely pragmatic, production-driven decision about linear ordering becomes encoded in the clitics in the Medieval Spanish system, which then through successive steps of routinization yields the modern systems with immediately pre-verbal fixed clitic templates.
We analyze anaphoric phenomena in the context of building an input understanding component for a conversational system for tutoring mathematics. In this paper, we report the results of data analysis of two sets of corpora of dialogs on mathematical theorem proving. We exemplify anaphoric phenomena, identify factors relevant to anaphora resolution in our domain and extensions to the input interpretation component to support it.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
Goal-oriented dialog as a collaborative subordinated activity involving collective acceptance
(2006)
Modeling dialog as a collaborative activity consists notably in specifying the content of the Conversational Common Ground and the kind of social mental state involved. In previous work (Saget, 2006), we claimed that Collective Acceptance is the proper social attitude for modeling Conversational Common Ground in the particular case of goal-oriented dialog. Here we formalize Collective Acceptance, provide elements for integrating this attitude into a rational model of dialog, and finally present a model of referential acts as part of a collaborative activity. The particular case of reference has been chosen to exemplify our claims.
A key problem for models of dialogue is to explain the mechanisms involved in generating and responding to clarification requests. We report a 'Maze task' experiment that investigates the effect of 'spoof' clarification requests on the development of semantic co-ordination. The results provide evidence of both local and global semantic co-ordination phenomena that are not captured by existing dialogue co-ordination models.
How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? To gain some insight into these questions, we present an ALife model in which the lexicon dynamics of populations that possess and lack metacommunicative interaction (MCI) capabilities are compared. We ran a series of experiments on multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal clear differences between the lexicon dynamics of populations that acquire words solely by introspection and populations that learn using MCI or a mixed strategy of introspection and MCI. The lexicon diverges at a faster rate in an introspective population, eventually collapsing to a single form associated with all meanings. This contrasts sharply with MCI-capable populations, in which a lexicon is maintained and every meaning is associated with a unique word. We also investigated the effect of enlarging the meaning space and showed that it speeds up lexicon divergence for all populations irrespective of their acquisition method.
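The contrast between introspective and MCI-based word acquisition described above can be illustrated with a toy naming-game-style simulation. This is a minimal sketch under our own assumptions, not the paper's actual model: the agent representation, the 0.7 guessing accuracy, and the `simulate` and `coherence` helpers are all illustrative choices.

```python
import random

def simulate(pop_size=10, meanings=5, use_mci=True, rounds=2000, seed=1):
    """Toy sketch: each agent maps every meaning to a word; in each
    round a random speaker names a random meaning and the hearer
    updates its lexicon, either via MCI repair or by guessing."""
    rng = random.Random(seed)
    # Start from fully distinct lexicons: agent i calls meaning m "w{i}_{m}".
    pop = [{m: f"w{i}_{m}" for m in range(meanings)} for i in range(pop_size)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(pop_size), 2)
        m = rng.randrange(meanings)
        word = pop[speaker][m]
        if use_mci:
            # Metacommunicative repair: the hearer asks which meaning the
            # word names and adopts it for exactly that meaning.
            pop[hearer][m] = word
        else:
            # Introspection only: the hearer guesses the meaning from
            # context and may attach the word to the wrong meaning.
            guess = m if rng.random() < 0.7 else rng.randrange(meanings)
            pop[hearer][guess] = word
    return pop

def coherence(pop):
    """Fraction of agent pairs that agree on the word for each meaning."""
    agree = total = 0
    for m in pop[0]:
        words = [lex[m] for lex in pop]
        for i in range(len(words)):
            for j in range(i + 1, len(words)):
                total += 1
                agree += words[i] == words[j]
    return agree / total
```

Comparing `coherence(simulate(use_mci=True))` against `coherence(simulate(use_mci=False))` over several seeds gives a rough feel for the dynamics the abstract describes, without reproducing the multi-generational setup of the original experiments.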
Demonstratives, in particular gestures that "only" accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem; the other is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR applications increases the need for theories of multimodal structures and events. In our workshop contribution we focus on the integration of multimodal contents and investigate different approaches to this problem, such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
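The CRQA measure used above can be sketched in its simplest categorical form: for each time lag, count the fraction of aligned time points at which two gaze streams fixate the same object (following the general idea of Richardson and Dale, 2005). This is a minimal sketch under assumed inputs; the function name `cross_recurrence` and the object-label representation are illustrative, not taken from the study.

```python
def cross_recurrence(gaze_a, gaze_b, max_lag=3):
    """Categorical cross-recurrence profile: for each lag in
    [-max_lag, +max_lag], the fraction of overlapping time points
    at which the two gaze streams fixate the same object."""
    n = min(len(gaze_a), len(gaze_b))
    profile = {}
    for lag in range(-max_lag, max_lag + 1):
        matches = count = 0
        for t in range(n):
            u = t + lag  # compare a at time t with b at time t + lag
            if 0 <= u < n:
                count += 1
                if gaze_a[t] == gaze_b[u]:
                    matches += 1
        profile[lag] = matches / count if count else 0.0
    return profile
```

A peak of the profile near lag 0 would indicate tightly coupled ("tandem") eye movements; in the listener-follows-speaker setting of the original method, the peak typically sits at a positive lag instead.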
Verbal or visual? : How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, such as direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech, while others complement speech. This paper reports a study of how speakers distribute meaning across speech and gesture, and of the factors this depends on. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals from the addressee had no influence.
We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we focus on the concept of the pointing cone, a geometrical model of a pointing gesture's extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with raters' classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they suggest separating pointing from reference.