We report on new mass-loss rate estimates for O stars in six massive binaries using the amplitude of orbital-phase-dependent, linear-polarimetric variability caused by light scattering off free electrons in the winds. Our estimated mass-loss rates for luminous O stars are independent of clumping. They suggest similar clumping corrections as for WR stars and do not support the recently proposed reduction in mass-loss rates of O stars by one or two orders of magnitude.
Clumping in O-star winds
(2007)
We have analyzed the spectra of seven Galactic O4 supergiants with the NLTE wind code CMFGEN. For all stars, we have found that clumped wind models provide a good match to lines from different species spanning a wavelength range from the FUV to the optical, and remain consistent with Hα data. We have achieved an excellent match of the P V λλ1118, 1128 resonance doublet and N IV λ1718, as well as He II λ4686, suggesting that our physical description of clumping is adequate. We find very small volume filling factors and that clumping starts deep in the wind, near the sonic point. The most crucial consequence of our analysis is that the mass-loss rates of O stars need to be revised downward significantly, by a factor of 3 or more compared to those obtained from smooth-wind models.
I discuss observational evidence – independent of the direct spectral diagnostics of stellar winds themselves – suggesting that mass-loss rates for O stars need to be revised downward by roughly a factor of three or more, in line with recent observed mass-loss rates for clumped winds. These independent constraints include the large observed mass-loss rates in LBV eruptions, the large masses of evolved massive stars like LBVs and WNH stars, WR stars in lower metallicity environments, observed rotation rates of massive stars at different metallicity, supernovae that seem to defy expectations of high mass-loss rates in stellar evolution, and other clues. I pay particular attention to the role of feedback that would result from higher mass-loss rates, driving the star to the Eddington limit too soon, and therefore making higher rates appear highly implausible. Some of these arguments by themselves may have more than one interpretation, but together they paint a consistent picture that steady line-driven winds of O-type stars have lower mass-loss rates and are significantly clumped.
The P V λλ1118, 1128 resonance doublet is an extraordinarily useful diagnostic of O-star winds, because it bypasses the traditional problems associated with determining mass-loss rates from UV resonance lines. We discuss critically the assumptions and uncertainties involved in using P V to diagnose mass-loss rates, and conclude that the large discrepancies between mass-loss rates determined from P V and the rates determined from “density squared” emission processes pose a significant challenge to the “standard model” of hot-star winds. The disparate measurements can be reconciled if the winds of O-type stars are strongly clumped on small spatial scales, which in turn implies that mass-loss rates based on Hα or radio emission are too large by up to an order of magnitude.
Significant seasonal variation in size at settlement has been observed in newly settled larvae of Dreissena polymorpha in Lake Constance. Diet quality, which varies temporally and spatially in freshwater habitats, has been suggested as a significant factor influencing life history and development of freshwater invertebrates. Accordingly, experiments were conducted with field-collected larvae to test the hypothesis that diet quality can determine planktonic larval growth rates, size at settlement and subsequent post-metamorphic growth rates. Larvae were fed one of two diets or starved. One diet was composed of cyanobacterial cells, which are deficient in polyunsaturated fatty acids (PUFAs), and the other was a mixed diet rich in PUFAs. Freshly metamorphosed animals from the starvation treatment had a carbon content per individual 70% lower than that of larvae fed the mixed diet. This apparent exhaustion of larval internal reserves resulted in a 50% reduction of the post-metamorphic growth rates. Growth was also reduced in animals previously fed the cyanobacterial diet. Hence, low food quantity or low food quality during the larval stage of D. polymorpha leads to irreversible effects in post-metamorphic animals and is related to inferior competitive abilities.
In the old days (pre ∼1990) hot stellar winds were assumed to be smooth, which made life fairly easy and bothered no one. Then, after suspicious behaviour had been revealed, e.g. stochastic temporal variability in broadband polarimetry of single hot stars, it took the emerging CCD technology developed in the preceding decades (∼1970s–80s) to reveal that these winds were far from smooth. It was mainly high-S/N, time-dependent spectroscopy of strong optical recombination emission lines in WR stars, and also in a few OB and other stars with strong hot winds, that indicated that all hot stellar winds are likely pervaded by thousands of multiscale (compressible supersonic turbulent?) structures, whose driver is probably some kind of radiative instability. Quantitative estimates of clumping-independent mass-loss rates came from various fronts, mainly diagnostics that depend directly on density (e.g. electron-scattering wings of emission lines, UV spectroscopy of weak resonance lines, and binary-star properties including orbital-period changes, electron scattering, and X-ray fluxes from colliding winds) rather than the more common, easier-to-obtain but clumping-dependent density-squared diagnostics (e.g. free-free emission in the IR/radio and recombination lines, of which the favourite has always been Hα). Many big questions still remain, such as: What do the clumps really look like? Do clumping properties change as one recedes from the mother star? Is clumping universal? Does the relative clumping correction depend on $\dot{M}$ itself?
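As a brief aside on why the density-squared diagnostics mentioned above (Hα, free-free IR/radio emission) overestimate mass-loss rates, the standard micro-clumping argument can be sketched as follows; the clumping-factor notation is introduced here only for illustration and is not quoted from the abstract.

```latex
% Micro-clumping with volume filling factor f and a void interclump medium:
% rho^2-diagnostics measure <rho^2>, not <rho>^2, so a smooth-wind analysis
% overestimates the true mass-loss rate by the square root of the clumping factor.
\langle \rho^{2} \rangle = f_{\mathrm{cl}}\,\langle \rho \rangle^{2},
\qquad f_{\mathrm{cl}} = 1/f \ge 1,
\qquad
\dot{M}_{\mathrm{smooth}} = \sqrt{f_{\mathrm{cl}}}\;\dot{M}_{\mathrm{true}} .
```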
Mass loss is a very important aspect of the life of massive stars. After briefly reviewing its importance, we discuss the impact of the recently proposed downward revision of mass-loss rates due to clumping (the difficulty of forming Wolf-Rayet stars and the production of critically rotating stars). Although a small reduction might be allowed, large reduction factors around ten are disfavoured. We then discuss the possibility of significant mass loss at very low metallicity due to stars reaching break-up velocities and especially due to the metal enrichment of the surface of the star via rotational and convective mixing. This significant mass loss may help the first very massive stars avoid the fate of pair-creation supernovae, the chemical signature of which is not observed in extremely metal-poor stars. The chemical composition of the very low metallicity winds is very similar to that of the most metal-poor star known to date, HE1327-2326, and offers an interesting explanation for the origin of the metals in this star. We also discuss the importance of mass loss in the context of long and soft gamma-ray bursts and pair-creation supernovae. Finally, we would like to stress that mass loss in the cooler parts of the HR diagram (luminous blue variable and yellow and red supergiant stages) is much more uncertain than in the hot part. More work needs to be done in these areas to better constrain the evolution of the most massive stars.
The factors that determine the efficiency of energy transfer in aquatic food webs have been investigated for many decades. The plant-animal interface is the most variable and least predictable of all levels in the food web. In order to study determinants of food quality in a large lake and to test the recently proposed central importance of the long-chained eicosapentaenoic acid (EPA) at the pelagic producer-grazer interface, we tested the importance of polyunsaturated fatty acids (PUFAs) at the pelagic producer-consumer interface by correlating sestonic food parameters with somatic growth rates of a clone of Daphnia galeata. Daphnia growth rates were obtained from standardized laboratory experiments spanning one season with Daphnia feeding on natural seston from Lake Constance, a large pre-alpine lake. Somatic growth rates were fitted to sestonic parameters by using a saturation function. A moderate amount of variation was explained when the model included the elemental parameters carbon (r² = 0.6) and nitrogen (r² = 0.71). A tighter fit was obtained when sestonic phosphorus was incorporated (r² = 0.86). The nonlinear regression with EPA was relatively weak (r² = 0.77), whereas the highest degree of variance was explained by three C18-PUFAs. The best (r² = 0.95), and only significant, correlation of Daphnia's growth was found with the C18-PUFA α-linolenic acid (α-LA; C18:3n-3). This correlation was weakest in late August when C:P values increased to 300, suggesting that mineral and PUFA limitation of Daphnia's growth changed seasonally. Sestonic phosphorus and some PUFAs showed tight correlations not only with growth, but also with sestonic α-LA content. We computed Monte Carlo simulations to test whether the observed effects of α-LA on growth could be accounted for by EPA, phosphorus, or one of the two C18-PUFAs, stearidonic acid (C18:4n-3) and linoleic acid (C18:2n-6). With >99% probability, the correlation of growth with α-LA could not be explained by any of these parameters. In order to test for EPA limitation of Daphnia's growth, in parallel with experiments on pure seston, growth was determined on seston supplemented with chemostat-grown, P-limited Stephanodiscus hantzschii, which is rich in EPA. Although supplementation increased the EPA content 80–800×, no significant changes in the nonlinear regression of the growth rates with α-LA were found, indicating that growth of Daphnia on pure seston was not EPA limited. This indicates that the two fatty acids, EPA and α-LA, were not mutually substitutable biochemical resources and points to different physiological functions of these two PUFAs. These results support the PUFA-limitation hypothesis for sestonic C:P < 300 but are contrary to the hypothesis of a general importance of EPA, since no evidence for EPA limitation was found. It is suggested that the resource ratios of EPA and α-LA rather than the absolute concentrations determine which of the two resources is limiting growth.
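To make the fitting step concrete, here is a minimal sketch of fitting somatic growth rates to a sestonic food parameter with a saturation function; the Monod-type form and all numbers below are illustrative assumptions, not values from the study.

```python
# Minimal sketch: fit growth rate vs. a sestonic food parameter with a
# saturation (Monod-type) function. Functional form and data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def saturation(x, g_max, k):
    """Growth rate saturating with increasing food parameter x."""
    return g_max * x / (k + x)

# hypothetical sestonic alpha-LA content and measured Daphnia growth rates (1/d)
x = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
g = np.array([0.12, 0.20, 0.28, 0.34, 0.38, 0.40])

params, _ = curve_fit(saturation, x, g, p0=[0.4, 2.0])
g_max, k = params
residuals = g - saturation(x, *params)
r_squared = 1 - np.sum(residuals**2) / np.sum((g - g.mean())**2)
print(f"g_max={g_max:.2f} 1/d, k={k:.2f}, r^2={r_squared:.2f}")
```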
4-Phenylphenoxazinones were isolated after biomimetic oxidation of N-acetyldopamine, using diphenoloxidases of insect cuticle or mushroom tyrosinase, or after its autoxidation, in the presence of β-alanine, β-alanine methyl ester or N-acetyl-L-lysine. They are presumably formed by addition of 2-aminoalkyl-5-alkylphenols to the o-quinone of a biphenyltetrol which, in turn, arises from oxidative coupling of N-acetyldopamine. These structures present the first examples of the assembly of reasonably stable intermediates in the rather complex process of chemical modification of aliphatic amino acid residues by o-quinones.
Amphiphilic derivatives of octadiene and docosadiene were investigated in monolayers and Langmuir-Blodgett multilayers, with respect to their self-organization and their polymerization behavior. All amphiphiles investigated form monolayers. However, only acid and alcohol derivatives were able to build up multilayers. Those multilayers are rapidly photopolymerized in the layers via a two-step process: Irradiation with long-wavelength UV light yields soluble polymers, whereas additional irradiation with short-wavelength UV light produces insoluble and presumably cross-linked polymers. The reaction mechanism is discussed according to the polymer characterization by UV spectroscopy, small-angle X-ray scattering, NMR spectroscopy, and gel permeation chromatography. All multilayers undergo structural changes during the polymerization; substantial changes result in defects in the polymerized layers as observed by scanning electron microscopy. In contrast to the acids and alcohols, the deposition of monolayers of the aldehyde derivatives did not yield well-ordered multilayers, but rather amorphous films. In this different film structure, the photopolymerization process differs from the one observed in multilayers.
The topography of first-order catchments in a region of western Amazonia was found to exhibit distinctive, recurrent features: a steep, straight lower side slope, a flat or nearly flat terrace at an intermediate elevation between valley floor and interfluve, and an upper side slope connecting interfluve and intermediate terrace. A detailed survey of soil saturated hydraulic conductivity (Ksat)-depth relationships, involving 740 undisturbed soil cores, was conducted in a 0.75-ha first-order catchment. The sampling approach was stratified with respect to the above slope units. Exploratory data analysis suggested fourth-root transformation of batches from the 0–0.1 m depth interval, log transformation of batches from the subsequent 0.1 m depth increments, and the use of robust estimators of location and scale. The Ksat of the steep lower side slope decreased from 46 to 0.1 mm/h over the overall sampling depth of 0.4 m. The corresponding decrease was from 46 to 0.1 mm/h on the intermediate terrace, from 335 to 0.01 mm/h on the upper side slope, and from 550 to 0.015 mm/h on the interfluve. A depthwise comparison of these slope units led to the formulation of several hypotheses concerning the link between Ksat and topography.
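A minimal sketch of the data treatment described above, assuming the fourth-root/log transformations and median/MAD as the robust location and scale estimators; the Ksat values are invented examples, not data from the survey.

```python
# Sketch: fourth-root transformation for the 0-0.1 m batch, log transformation for
# deeper batches, and robust location/scale via median and MAD (hypothetical data).
import numpy as np

def robust_summary(ksat_mm_per_h, depth_top_m):
    x = np.asarray(ksat_mm_per_h, dtype=float)
    # transformation depends on the depth interval, as in the sampling design
    z = x**0.25 if depth_top_m < 0.1 else np.log(x)
    location = np.median(z)
    scale = 1.4826 * np.median(np.abs(z - location))  # MAD scaled to approx. sigma
    return location, scale

loc, scale = robust_summary([550, 320, 410, 120, 600, 80], depth_top_m=0.0)
print(f"location={loc:.2f}, scale={scale:.2f} (fourth-root scale)")
```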
Rainfall erosivities as defined by the R factor from the Universal Soil Loss Equation were determined for all events during a two-year period at the station La Cuenca in western Amazonia. Three methods based on a power relationship between rainfall amount and erosivity were then applied to estimate event and daily rainfall erosivities from the respective rainfall amounts. A test of the resulting regression equations against an independent data set proved all three methods equally adequate in predicting rainfall erosivity from daily rainfall amount. We recommend the Richardson model for testing in the Amazon Basin, and its use with the coefficient from La Cuenca in western Amazonia.
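As an illustration of the kind of power relationship mentioned above, here is a minimal regression sketch; the form R = a·P^b is a common choice for such models, and the coefficients and data are invented, not the La Cuenca values.

```python
# Sketch: fit a power relationship R = a * P**b between daily rainfall amount P and
# erosivity R by linear regression in log space (hypothetical data).
import numpy as np

P = np.array([12.0, 25.0, 44.0, 78.0, 120.0, 178.0])    # rainfall (mm), hypothetical
R = np.array([15.0, 60.0, 150.0, 420.0, 900.0, 1800.0])  # erosivity, hypothetical

b, log_a = np.polyfit(np.log(P), np.log(R), 1)  # slope, intercept
a = np.exp(log_a)
print(f"R ~ {a:.2f} * P^{b:.2f}")
print("predicted erosivity for a 50 mm day:", round(a * 50.0**b, 1))
```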
Previous hydrometric studies demonstrated the prevalence of overland flow as a hydrological pathway in the tropical rain forest catchment of South Creek, northeast Queensland. The purpose of this study was to consider this information in a mixing analysis with the aim of identifying sources of, and estimating their contribution to, storm flow during two events in February 1993. K and acid-neutralizing capacity (ANC) were used as tracers because they provided the best separation of the potential sources (saturation overland flow; soil water from depths of 0.3, 0.6, and 1.2 m; and hillslope groundwater) in a two-dimensional mixing plot. It was necessary to distinguish between saturation overland flow, generated at the soil surface and following unchanneled pathways, and overland flow in incised pathways. This latter type of overland flow was a mixture of saturation overland flow (event water) with high concentrations of K and a low ANC, soil water (pre-event water) with low concentrations of K and a low ANC, and groundwater (pre-event water) with low concentrations of K and a high ANC. The same sources explained the streamwater chemistry during the two events with strongly differing rainfall and antecedent moisture conditions. The contribution of saturation overland flow dominated the storm flow during the first, high-intensity, 178-mm event, while the contribution of soil water reached 50% during peak flow of the second, low-intensity, 44-mm event 5 days later. This latter result is remarkably similar to soil water contributions to storm flow in mountainous forested catchments of the southeastern United States. In terms of event and pre-event water, the storm flow hydrograph of the high-intensity event is dominated by event water and that of the low-intensity event by pre-event water. This study highlights the problems of applying mixing analyses to overland flow-dominated catchments and soil environments with a poorly developed vertical chemical zonation and emphasizes the need for independent hydrometric information for a complete characterization of watershed hydrology and chemistry.
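To illustrate the mechanics of the two-tracer mixing analysis referred to above: with K and ANC as tracers plus the water balance, the fractional contributions of three end members can be solved from a small linear system. The end-member concentrations below are invented for illustration only.

```python
# Sketch: three-end-member hydrograph separation from two tracers plus mass balance.
import numpy as np

# rows: water balance, K balance, ANC balance
# columns: saturation overland flow, soil water, groundwater (hypothetical values)
end_members = np.array([
    [1.0,  1.0,  1.0],   # fractions sum to 1
    [8.0,  1.5,  1.0],   # K concentration of each end member (mg/L)
    [0.05, 0.05, 0.60],  # ANC of each end member (meq/L)
])
stream = np.array([1.0, 4.5, 0.25])  # observed streamwater: [1, K, ANC]

fractions = np.linalg.solve(end_members, stream)
print(dict(zip(["overland flow", "soil water", "groundwater"], fractions.round(2))))
```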
Chemical fingerprints of hydrological compartments and flow paths at La Cuenca, western Amazonia
(1995)
A forested first-order catchment in western Amazonia was monitored for 2 years to determine the chemical fingerprints of precipitation, throughfall, overland flow, pipe flow, soil water, groundwater, and streamflow. We used five tracers (hydrogen, calcium, magnesium, potassium, and silica) to distinguish “fast” flow paths mainly influenced by the biological subsystem from “slow” flow paths in the geochemical subsystem. The former comprise throughfall, overland flow, and pipe flow and are characterized by a high potassium/silica ratio; the latter are represented by soil water and groundwater, which have a low potassium/silica ratio. Soil water and groundwater differ with respect to calcium and magnesium. The groundwater-controlled streamflow chemistry is strongly modified by contributions from fast flow paths during precipitation events. The high potassium/silica ratio of these flow paths suggests that the storm flow response at La Cuenca is dominated by event water.
Earlier investigations at South Creek in northeastern Queensland established the importance of overland flow as a hydrologic pathway in this tropical rainforest environment. Since this pathway is ‘fast’, transmitting presumably ‘new’ water, its importance should be reflected in the stormflow chemistry of South Creek: the greater the volumetric contribution to the stormflow hydrograph, the more similarity between the chemical composition of streamwater and of overland flow is to be expected. Water samples were taken during two storm events in an ephemeral gully (gully A), an intermittent gully (gully B) and at the South Creek catchment outlet; additional spot checks were made in several poorly defined rills. The chemical composition of ‘old’ water was determined from 45 baseflow samples collected throughout February. The two events differed considerably in their magnitudes, intensities and antecedent moisture conditions. In both events, the stormflow chemistry in South Creek was characterized by a sharp decrease in Ca, Mg, Na, Si, Cl, EC, ANC, alkalinity and total inorganic carbon. pH remained nearly constant with discharge, whereas K increased sharply, as did sulfate in an ill-defined manner. In event 1, this South Creek stormflow pattern was closely matched by the pattern in gully A, implying a dominant contribution of ‘new’ water. This match was confirmed by the spot samples from rills. Gully B behaved like South Creek itself, but with a dampened ‘new’ water signal, indicating less overland flow generation in its subcatchment. In event 2, which occurred five days later, the initial ‘new’ water signal in gully A was rapidly overwhelmed by a different signal which is attributed to rapid drainage from a perched water table. This study shows that stormflow in this rainforest catchment consists predominantly of ‘new’ water which reaches the stream channel via ‘fast’ pathways. Where the ephemeral gullies delivering overland flow are incised deeply enough to intersect a perched water table, a delayed, ‘old’ water-like signal may be transmitted.
Just and Carpenter (1980) presented a theory of reading based on eye fixations wherein their "psycholinguistic" variables accounted for 72% of the variance in word gaze durations. This comment raises some statistical and theoretical problems with their use of simultaneous regression analysis of gaze duration measures and with the resulting theory of reading. A major problem was the confounding of perceptual with psycholinguistic factors. New eye fixation data are presented to support these criticisms. Analysis of fixations within words revealed that most gaze duration variance was contributed by number of fixations rather than by fixation duration.
The complement fragments C3a and C5a were purified from zymosan-activated human serum by column chromatographic procedures after the bulk of the proteins had been removed by acidic polyethylene glycol precipitation. In the isolated in situ perfused rat liver C3a increased glucose and lactate output and reduced flow. Its effects were enhanced in the presence of the carboxypeptidase inhibitor DL-mercaptomethyl-3-guanidinoethylthio-propanoic acid (MERGETPA) and abolished by preincubation of the anaphylatoxin with carboxypeptidase B or with Fab fragments of an anti-C3a monoclonal antibody. The C3a effects were partially inhibited by the thromboxane antagonist BM13505. C5a had no effect. It is concluded that locally but not systemically produced C3a may play an important role in the regulation of local metabolism and hemodynamics during inflammatory processes in the liver.
In the isolated rat liver perfused in situ, stimulation of the nerve bundles around the portal vein and the hepatic artery caused an increase of urate formation that was inhibited by the α1-blocker prazosin and the xanthine oxidase inhibitor allopurinol. Moreover, nerve stimulation increased glucose and lactate output and decreased perfusion flow. Infusion of noradrenaline had similar effects. Compared to nerve stimulation, infusion of glucagon led to a less pronounced increase of urate formation and a twice as large increase in glucose output, but a decrease in lactate release, without affecting the flow rate. Insulin had no effect on any of the parameters studied.
Increase in prostanoid formation in rat liver macrophages (Kupffer cells) by human anaphylatoxin C3a
(1993)
Human anaphylatoxin C3a increases glycogenolysis in perfused rat liver. This action is inhibited by prostanoid synthesis inhibitors and prostanoid antagonists. Because prostanoids but not anaphylatoxin C3a can increase glycogenolysis in hepatocytes, it has been proposed that prostanoid formation in nonparenchymal cells represents an important step in the C3a-dependent increase in hepatic glycogenolysis. This study shows that (a) human anaphylatoxin C3a (0.1 to 10 μg/ml) dose-dependently increased prostaglandin D2, thromboxane B2, and prostaglandin F2α formation in rat liver macrophages (Kupffer cells); (b) the C3a-mediated increase in prostanoid formation was maximal after 2 min and showed tachyphylaxis; and (c) the C3a-elicited prostanoid formation could be inhibited specifically by preincubation of C3a with carboxypeptidase B to remove the essential C-terminal arginine or by preincubation of C3a with Fab fragments of a neutralizing monoclonal antibody. These data support the hypothesis that the C3a-dependent activation of hepatic glycogenolysis is mediated by way of a C3a-induced prostanoid production in Kupffer cells.
The effect of moderate rates of nitrogen deposition on ground floor vegetation is poorly predicted by uncontrolled surveys or fertilization experiments using high rates of nitrogen (N) addition. We compared the temporal trends of ground floor vegetation in permanent plots with moderate (7–13 kg ha−1 year−1) and lower bulk N deposition (4–6 kg ha−1 year−1) in southern Sweden during 1982–1998. We examined whether trends differed between growth forms (vascular plants and bryophytes) and vegetation types (three types of coniferous forest, deciduous forest, and bog). Trends of site-standardized cover and richness varied among growth forms, vegetation types, and deposition regions. Cover in spruce forests decreased at the same rate with both moderate and low deposition. In pine forests cover decreased faster with moderate deposition and in bogs cover decreased faster with low deposition. Cover of bryophytes in spruce forests increased at the same rate with both moderate and low deposition. In pine forests cover decreased faster with moderate deposition and in bogs and deciduous forests there was a strong non-linear increase with moderate deposition. The trend of number of vascular plants was constant with moderate and decreased with low deposition. We found no trend in the number of bryophyte species. We propose that the decrease of cover and number with low deposition was related to normal ecosystem development (increased shading), suggesting that N deposition maintained or increased the competitiveness of some species in the moderate-deposition region. Deposition had no consistent negative effect on vegetation suggesting that it is less important than normal successional processes.
Recent research has shown that the early lexical representations children establish in their second year of life already seem to be phonologically detailed enough to allow differentiation from very similar forms. In contrast to these findings, children with specific language impairment show problems in discriminating phonologically similar word forms up to school age. In our study we investigated whether there are differences in the processing of phonological details between normally developing children and children with low language performance in the second year of life. This was done in a retrospective study in which the processing of phonological details was tested in a preferential looking experiment when the children were 19 months old. At the age of 30 months the children were tested with a standardized German test of language comprehension and production (SETK2). The preferential looking data at 19 months revealed an opposite reaction pattern for the two groups: while the children scoring normally in the SETK2 increased their fixations of a pictured object only when it was named with the correct word, children with later low language performance did so only when presented with a phonologically slightly deviant mispronunciation. We suggest that this pattern does not point to a specific deficit in processing phonological information in these children but might be related to an instability of early phonological representations and/or a generalized problem of information processing as compared to typically developing children.
Recent work has shown that English-learning 18-month-olds can detect the relationship between discontinuous morphemes such as is and -ing in Grandma is always running (Gomez, 2002; Santelmann & Jusczyk, 1998) but only at a maximum of 3 intervening syllables. In this article we examine the tracking of discontinuous dependencies in children acquiring German. Due to freer word order, German allows for greater distances between dependent elements and a greater syntactic variety of the intervening elements than English does. The aim of this study was to investigate whether factors other than distance may influence the child’s capacity to recognize discontinuous elements. Our findings provide evidence that children’s recognition capacities are affected not only by distance but also by their ability to linguistically analyze the material intervening between the dependent elements. We speculate that this result supports the existence of processing mechanisms that reduce a discontinuous relation to a local one based on subcategorization relations.
How do children determine the syntactic category of novel words? In this article we present the results of 2 experiments that investigated whether German children between 12 and 16 months of age can use distributional knowledge that determiners precede nouns and subject pronouns precede verbs to syntactically categorize adjacent novel words. Evidence from the head-turn preference paradigm shows that, although 12- to 13-month-olds cannot do this, 14- to 16-month-olds are able to use a determiner to categorize a following novel word as a noun. In contrast, no categorization effect was found for a novel word following a subject pronoun. To understand this difference we analyzed adult child-directed speech. This analysis showed that there are in fact stronger co-occurrence relations between determiners and nouns than between subject pronouns and verbs. Thus, in German determiners may be more reliable cues to the syntactic category of an adjacent novel word than are subject pronouns. We propose that the capacity to syntactically categorize novel words, demonstrated here for the first time in children this young, mediates between the recognition of the specific morphosyntactic frame in which a novel word appears and the word-to-world mapping that is needed to build up a semantic representation for the novel word.
Many methods have been proposed for the simulation of constrained mechanical systems. The most obvious of these have mild instabilities and drift problems. Consequently, stabilization techniques have been proposed. A popular stabilization method is Baumgarte's technique, but the choice of parameters to make it robust has been unclear in practice. Some of the simulation methods that have been proposed and used in computations are reviewed here, from a stability point of view. This involves concepts of differential-algebraic equation (DAE) and ordinary differential equation (ODE) invariants. An explanation of the difficulties that may be encountered using Baumgarte's method is given, and a discussion of why a further quest for better parameter values for this method will always remain frustrating is presented. It is then shown how Baumgarte's method can be improved. An efficient stabilization technique is proposed, which may employ explicit ODE solvers in the case of nonstiff or highly oscillatory problems and which relates to coordinate projection methods. Examples of a two-link planar robotic arm and a squeezing mechanism illustrate the effectiveness of this new stabilization method.
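For readers unfamiliar with the technique, Baumgarte's method replaces the position constraint g(q) = 0 by a damped second-order equation in the constraint residual; the generic form below is a sketch for orientation, not a formula quoted from the paper.

```latex
% Baumgarte stabilization: instead of enforcing g(q)=0 directly, the constraint
% residual is made to obey a damped oscillator equation so that drift decays.
% The difficulty discussed above is that no general recipe fixes alpha and beta
% robustly for a given problem and step size.
\ddot g(q) + 2\alpha\,\dot g(q) + \beta^{2} g(q) = 0, \qquad \alpha,\beta>0 .
```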
A Hamiltonian system in potential form (formula in the original abstract) subject to smooth constraints on q can be viewed as a Hamiltonian system on a manifold, but numerical computations must be performed in Rn. In this paper methods which reduce "Hamiltonian differential algebraic equations" to ODEs in Euclidean space are examined. The authors study the construction of canonical parameterizations or local charts as well as methods based on the construction of ODE systems in the space in which the constraint manifold is embedded which preserve the constraint manifold as an invariant manifold. In each case, a Hamiltonian system of ordinary differential equations is produced. The stability of the constraint invariants and the behavior of the original Hamiltonian along solutions are investigated both numerically and analytically.
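The formula referred to in the abstract is missing above; purely as an assumed illustration, a constrained Hamiltonian system in potential form is typically written as follows.

```latex
% Generic constrained Hamiltonian system in potential form (illustrative assumption):
%   H(q,p) = (1/2) p^T M^{-1} p + V(q),  subject to  g(q) = 0.
\dot q = M^{-1} p, \qquad
\dot p = -\nabla V(q) - G(q)^{\mathsf T}\lambda, \qquad
g(q) = 0, \qquad G(q) = \partial g/\partial q .
```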
Many methods have been proposed for the stabilization of higher index differential-algebraic equations (DAEs). Such methods often involve constraint differentiation and problem stabilization, thus obtaining a stabilized index reduction. A popular method is Baumgarte stabilization, but the choice of parameters to make it robust is unclear in practice. Here we explain why the Baumgarte method may run into trouble. We then show how to improve it. We further develop a unifying theory for stabilization methods which includes many of the various techniques proposed in the literature. Our approach is to (i) consider stabilization of ODEs with invariants, (ii) discretize the stabilizing term in a simple way, generally different from the ODE discretization, and (iii) use orthogonal projections whenever possible. The best methods thus obtained are related to methods of coordinate projection. We discuss them and make concrete algorithmic suggestions.
During the last few years there was a tremendous growth of scientific activities in the fields related to both Physics and Control theory: nonlinear dynamics, micro- and nanotechnologies, self-organization and complexity, etc. New horizons were opened and new exciting applications emerged. Experts with different backgrounds starting to work together need more opportunities for information exchange to improve mutual understanding and cooperation. The Conference "Physics and Control 2007" is the third international conference focusing on the borderland between Physics and Control with emphasis on both theory and applications. With its 2007 address at Potsdam, Germany, the conference is located for the first time outside of Russia. The major goal of the Conference is to bring together researchers from different scientific communities and to gain some general and unified perspectives in the studies of controlled systems in physics, engineering, chemistry, biology and other natural sciences. We hope that the Conference helps experts in control theory to get acquainted with new interesting problems, and helps experts in physics and related fields to know more about ideas and tools from the modern control theory.
An approach to the development of fluorescent probes to follow polymerizations in situ using fluorinated cross-conjugated enediynes (Y-enynes) is reported. Different substitution patterns in the Y-enynes result in distinct solvatochromic behavior. β,β-Bis(phenylethynyl)pentafluorostyrene 7, which bears no donor substituents and only fluorine at the styrene moiety, shows no solvatochromism. Donor-substituted β,β-bis(3,4,5-trimethoxyphenylethynyl)pentafluorostyrene 8 and β,β-bis(4-butyl-2,3,5,6-tetrafluorophenylethynyl)-3,4,5-trimethoxystyrene 9 exhibit solvatochromism upon change of solvent polarity. Y-enyne 8 showed the largest solvatochromic shift (94 nm bathochromic shift) upon changing solvent from cyclohexane to acetonitrile. A smaller solvatochromic response (44 nm bathochromic shift) was observed for 9. Lippert–Mataga treatment of 8 and 9 yields slopes of -10,800 and -6,400 cm⁻¹, respectively. This corresponds to a change in dipole moment of 9.6 and 6.9 D, respectively. The solvatochromic behavior of 8 and 9 supports the formation of an intramolecular charge transfer (ICT) state. The low fluorescence quantum yields are caused by competitive double bond rotation. The fluorescence decay time of 9 decreases in methyltetrahydrofuran from 2.1 ns at 77 K to 0.11 ns at 200 K. Efficient single bond rotation in 9 was frozen at -50 °C in a configuration in which the trimethoxyphenyl ring is perpendicular to the fluorinated rings. 7–9 are photostable compounds. The X-ray structure of 7 shows that it is not planar and that its conjugation is distorted. Y-enyne 7 stacks in the solid state, showing coulombic, acetylene–arene, and fluorine–π interactions.
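For context, the Lippert–Mataga treatment mentioned above relates the solvent-dependent Stokes shift to the change in dipole moment; the standard relation is sketched below (the Onsager cavity radius a is not quoted in the abstract and enters as an assumed parameter).

```latex
% Lippert--Mataga relation: the Stokes shift varies linearly with the solvent
% orientation polarizability Delta f, and the slope gives the dipole-moment change
% for an assumed cavity radius a.
\bar\nu_{A} - \bar\nu_{F}
  = \frac{2\,\Delta f}{h c\, a^{3}}\,(\mu_{E}-\mu_{G})^{2} + \mathrm{const},
\qquad
\Delta f = \frac{\varepsilon-1}{2\varepsilon+1} - \frac{n^{2}-1}{2n^{2}+1} .
```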
Investigations with frequency domain photon density waves allow elucidation of the absorption and scattering properties of turbid media. The temporal and spatial propagation of intensity-modulated light with frequencies up to more than 1 GHz can be described by the P1 approximation to the Boltzmann transport equation. In this study, we establish requirements for the appropriate choice of turbid model media and characterize mixtures of isosulfan blue as absorber and polystyrene beads as scatterer. For these model media, the independent determination of absorption and reduced scattering coefficients over large absorber and scatterer concentration ranges is demonstrated with a frequency domain photon density wave spectrometer employing intensity and phase measurements at various modulation frequencies.
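As background for how intensity and phase measurements yield the two coefficients independently, the standard frequency-domain diffusion result (the P1 approximation in its usual limit) for an intensity-modulated point source in an infinite homogeneous medium is sketched below; the specific instrument geometry of the study may differ.

```latex
% The AC photon density behaves as a damped spherical wave; fitting its amplitude
% decay and phase shift versus source-detector distance r at several modulation
% frequencies omega yields mu_a and mu_s' separately.
\Phi_{\mathrm{AC}}(r) \propto \frac{e^{-k r}}{r},
\qquad
k = \sqrt{\frac{\mu_a - i\omega/c}{D}},
\qquad
D = \frac{1}{3(\mu_a + \mu_s')} .
```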
I perform and analyse the first ever calculations of rotating stellar iron core collapse in 3+1 general relativity that start out with presupernova models from stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from collapse, bounce and the early postbounce phase of core collapse supernovae. I supplement my 3+1 GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of accretion-induced collapse of white dwarfs to neutron stars.
The end of the cold war division of the Baltic Sea in 1989, and the three Baltic states’ return to independence in 1991 created new opportunities for the decision-makers of the area, as well as new possibilities for fashioning security in the region. This article will examine the security debate affecting the Baltic Sea region in the post-cold war period, and in particular, the relevance of the European Union to that debate. The following section will examine various concepts of security relevant to the Baltic region; the third section looks at the EU and the Baltic area; and the last part deals with the implications that EU membership by the Baltic Sea states may have for the security of the Baltic Sea zone.
The article mobilises the concept of strategic culture in order to identify the impact of history upon contemporary security policy. The article will first look at the "wholesale construction" of a strategic culture after the Second World War in West Germany before exploring its impact upon security policy since the end of the Cold War in two areas: the Bundeswehr's out-of-area role and conscription. The central argument presented here is that the strategic culture of the former Federal Republic now writ large on to the new united Germany sets the context within which security policies are designed. This strategic culture, as will be argued, acts as both a facilitating and a restraining variable on behaviour, making certain policy options possible and others impossible.
Observers of international politics have been conscious of the growing international involvement of non-central governments (NCGs), particularly in federal systems. These have been supplemented by the internationalisation of subnational actors in quasi-federal and even unitary states. One of the difficulties is that analysis has often been locked into the dominant paradigm debate in International Relations concerning who are, and who are not, significant actors. Having briefly explored the nature of this changing environment, marked by a growing emphasis on access rather than control as a policy objective and the emergence of what is termed a 'catalytic diplomacy', the discussion focuses on the need for linkage between the levels of government in the pursuit of international as well as domestic policy goals. The nature of these linkage mechanisms is discussed.
On the basis of the Dynamic Syntax framework, this paper argues that the production pressures in dialogue determining alignment effects and given versus new informational effects also drive the shift from case-rich free word order systems without clitic pronouns into systems with clitic pronouns with rigid relative ordering. The paper introduces assumptions of Dynamic Syntax, in particular the building up of interpretation through structural underspecification and update, sketches the attendant account of production with close coordination of parsing and production strategies, and shows how what was at the Latin stage a purely pragmatic, production-driven decision about linear ordering becomes encoded in the clitics in the Medieval Spanish system, which then through successive steps of routinization yields the modern systems with immediately pre-verbal fixed clitic templates.
We analyze anaphoric phenomena in the context of building an input understanding component for a conversational system for tutoring mathematics. In this paper, we report the results of data analysis of two sets of corpora of dialogs on mathematical theorem proving. We exemplify anaphoric phenomena, identify factors relevant to anaphora resolution in our domain and extensions to the input interpretation component to support it.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
Goal-oriented dialog as a collaborative subordinated activity involving collective acceptance
(2006)
Modeling dialog as a collaborative activity consists notably in specifying the content of the Conversational Common Ground and the kind of social mental state involved. In previous work (Saget, 2006), we claimed that Collective Acceptance is the proper social attitude for modeling Conversational Common Ground in the particular case of goal-oriented dialog. We provide a formalization of Collective Acceptance, together with elements needed to integrate this attitude into a rational model of dialog, and finally a model of referential acts as part of a collaborative activity. The particular case of reference has been chosen in order to exemplify our claims.
A key problem for models of dialogue is to explain the mechanisms involved in generating and responding to clarification requests. We report a 'Maze task' experiment that investigates the effect of 'spoof' clarification requests on the development of semantic co-ordination. The results provide evidence of both local and global semantic co-ordination phenomena that are not captured by existing dialogue co-ordination models.
How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? In order to get some insight into these questions, we present an ALife model in which the lexicon dynamics of populations that possess and lack metacommunicative interaction (MCI) capabilities are compared. We ran a series of experiments on multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal some clear differences in the lexicon dynamics of populations that acquire words solely by introspection contrasted with populations that learn using MCI or using a mixed strategy of introspection and MCI. The lexicon diverges at a faster rate for an introspective population, eventually collapsing to one single form which is associated with all meanings. This contrasts sharply with MCI-capable populations, in which a lexicon is maintained where every meaning is associated with a unique word. We also investigated the effect of increasing the meaning space and showed that it speeds up the lexicon divergence for all populations irrespective of their acquisition method.
Demonstratives, in particular gestures that "only" accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem; the other is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR applications increases the need for theories of multimodal structures and events. In our workshop contribution we focus on the integration of multimodal contents and investigate different approaches dealing with this problem, such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
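To make the CRQA idea concrete, here is a minimal sketch of a cross-recurrence matrix for two categorical gaze streams; the data are invented, and real CRQA additionally computes lagged recurrence profiles and diagonal-line measures not shown here.

```python
# Sketch: cross-recurrence matrix and rate for two interlocutors' gaze streams
# (which object each fixates in each time bin). Hypothetical data.
import numpy as np

def cross_recurrence_rate(gaze_a, gaze_b):
    a = np.asarray(gaze_a)
    b = np.asarray(gaze_b)
    # recurrence matrix: 1 where speaker A at time i fixates the same object
    # as speaker B at time j
    cr = (a[:, None] == b[None, :]).astype(int)
    return cr, cr.mean()

gaze_a = ["cup", "cup", "ball", "ball", "cup", "doll"]
gaze_b = ["cup", "ball", "ball", "cup", "cup", "doll"]
cr_matrix, rate = cross_recurrence_rate(gaze_a, gaze_b)
print("cross-recurrence rate:", round(rate, 2))
```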
Verbal or visual? : How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, as in direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and depending on what factors. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee do not have any influence.
We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we are focusing upon the concept of the pointing cone, a geometrical model of a pointing's extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with raters' classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they advise separating pointings from reference.
Classical SDRT (Asher and Lascarides, 2003) discussed essential features of dialogue like adjacency pairs or corrections and updating. Recent work in SDRT (Asher, 2002, 2005) aims at the description of natural dialogue. We use this work to model situated communication, i.e. dialogue in which sub-sentential utterances and gestures (pointing and grasping) are used as conventional modes of communication. We show that in addition to cognitive modelling in SDRT, capturing mental states and speech-act related goals, special postulates are needed to extract meaning out of contexts. Gestural meaning anchors Discourse Referents in contextually given domains. Both sorts of meaning are fused with the meaning of fragments to get at fully developed dialogue moves. This task accomplished, the standard SDRT machinery (tagged SDRSs, rhetorical relations, the update mechanism, and the Maximize Discourse Coherence constraint) generates coherent structures. In sum, meanings from different verbal and non-verbal sources are assembled using extended SDRT to form coherent wholes.
We present a formal analysis of iconic coverbal gesture. Our model describes the incomplete meaning of gesture that’s derivable from its form, and the pragmatic reasoning that yields a more specific interpretation. Our formalism builds on established models of discourse interpretation to capture key insights from the descriptive literature on gesture: synchronous speech and gesture express a single thought, but while the form of iconic gesture is an important clue to its interpretation, the content of gesture can be resolved only by linking it to its context.
In two experiments, many annotators marked antecedents for discourse deixis as unconstrained regions of text. The experiments show that annotators do converge on the identity of these text regions, though much of what they do can be captured by a simple model. Demonstrative pronouns are more likely than definite descriptions to be marked with discourse antecedents. We suggest that our methodology is suitable for the systematic study of discourse deixis.
We present a new analysis of illocutionary forces in dialogue. We analyze them as complex conversational moves involving two dimensions: what Speaker commits herself to and what she calls on Addressee to perform. We start from the analysis of speech acts such as confirmation requests or whimperatives, and extend the analysis to seemingly simple speech acts, such as statements and queries. Then, we show how to integrate our proposal in the framework of the Grammar for Conversation (Ginzburg, to app.), which is adequate for modelling agents' information states and how they get updated.
Claiming that cross-speaker "but" can signal correction in dialogue, we start by describing the types of corrections "but" can communicate by focusing on the Speech Act (SA) communicated in the previous turn and address the ways in which "but" can correct what is communicated. We address whether "but" corrects the proposition, the direct SA or the discourse relation communicated in the previous turn. We will also briefly address other relations signalled by cross-turn "but". After presenting a typology of the situations "but" can correct, we will address how these corrections can be modelled in the Information State model of dialogue, motivating this work by showing how it can be used to potentially avoid misunderstandings. We wrap up by showing how the model presented here updates beliefs in the Information State representation of the dialogue and can be used to facilitate response deliberation.
An account is presented of the focus properties, common ground effect and dialogue behaviour of the accented German discourse marker "doch" and the accented sentence negation "nicht". It is argued that "doch" and "nicht" evoke as a focus alternative the logical complement of the proposition expressed by the sentence in which they occur, and that an analysis in terms of contrastive focus accounts for their effect on the common ground and their function in dialogue.
Improvement of a fluorescence immunoassay with a compact diode-pumped solid state laser at 315 nm
(2006)
We demonstrate the improvement of fluorescence immunoassay (FIA) diagnostics by deploying a newly developed compact diode-pumped solid state (DPSS) laser with emission at 315 nm. The laser is based on the quasi-three-level transition in Nd:YAG at 946 nm. The pulsed operation is realized either by an active Q-switch using an electro-optical device or by introduction of a Cr4+:YAG saturable absorber as a passive Q-switch element. By extra-cavity second harmonic generation in different nonlinear crystal media we obtained blue light at 473 nm. Subsequent mixing of the fundamental and the second harmonic in a β-barium-borate crystal provided pulsed emission at 315 nm with up to 20 μJ maximum pulse energy and 17 ns pulse duration. Substitution of a nitrogen laser in a FIA diagnostics system by the DPSS laser resulted in a considerable improvement of the detection limit. Despite significantly lower pulse energies (7 μJ from the DPSS laser versus 150 μJ from the nitrogen laser), in preliminary investigations the limit of detection was reduced by a factor of three for a typical FIA.
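The wavelength bookkeeping behind the scheme described above can be checked directly: frequency doubling of the 946 nm fundamental gives 473 nm, and sum-frequency mixing of fundamental and second harmonic gives the UV output.

```latex
% Sum-frequency generation of the fundamental and its second harmonic:
\frac{1}{\lambda_{\mathrm{SFG}}}
  = \frac{1}{946\,\mathrm{nm}} + \frac{1}{473\,\mathrm{nm}}
  = \frac{3}{946\,\mathrm{nm}}
\;\;\Rightarrow\;\;
\lambda_{\mathrm{SFG}} \approx 315\,\mathrm{nm} .
```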
Two examples of our biophotonic research utilizing nanoparticles are presented, namely laser-based fluoroimmuno analysis and in-vivo optical oxygen monitoring. Results of the work include significantly enhanced sensitivity of a homogeneous fluorescence immunoassay and markedly improved spatial resolution of oxygen gradients in root nodules of a legume species.
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem-solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and corpus annotation which shows that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models, and addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can be used as a foundation for improving models of tutoring dialogue.
Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several "semiotic layers", modalities of information such as syntax, discourse structure, gesture, and intonation. We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices. Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting.
This paper investigates the structural properties of morphosyntactically marked focus constructions, focussing on the often neglected non-focal sentence part in African tone languages. Based on new empirical evidence from five Gur and Kwa languages, we claim that these focus expressions have to be analysed as biclausal constructions even though they do not represent clefts containing restrictive relative clauses. First, we relativize the partly overgeneralized assumptions about structural correspondences between the out-of-focus part and relative clauses, and second, we show that our data do in fact support the hypothesis of a clause coordinating pattern as present in clause sequences in narration. It is argued that we deal with a non-accidental, systematic feature and that grammaticalization may conceal such basic narrative structures.
The Semantics of Ellipsis
(2005)
There are four phenomena that are particularly troublesome for theories of ellipsis: the existence of sloppy readings when the relevant pronouns cannot possibly be bound; an ellipsis being resolved in such a way that an ellipsis site in the antecedent is not understood in the way it was there; an ellipsis site drawing material from two or more separate antecedents; and ellipsis with no linguistic antecedent. These cases are accounted for by means of a new theory that involves copying syntactically incomplete antecedent material and an analysis of silent VPs and NPs that makes them into higher order definite descriptions that can be bound into.
We present a system for the linguistic exploration and analysis of lexical cohesion in English texts. Using an electronic thesaurus-like resource, Princeton WordNet, and the Brown Corpus of English, we have implemented a process of annotating text with lexical chains and a graphical user interface for inspection of the annotated text. We describe the system and report on some sample linguistic analyses carried out using the combined thesaurus-corpus resource.
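As a toy illustration of the kind of lexical chaining such a system performs (not the actual implementation described above), here is a minimal greedy chainer over WordNet: a word joins an existing chain if one of its noun synsets is sufficiently related to a synset already in the chain, otherwise it starts a new chain. The similarity threshold is an arbitrary assumption.

```python
# Sketch: greedy lexical chaining with WordNet path similarity.
# Requires: nltk and the WordNet data (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def build_chains(words, threshold=0.1):
    chains = []  # each chain: list of (word, synset) pairs
    for word in words:
        synsets = wn.synsets(word, pos=wn.NOUN)
        if not synsets:
            continue
        placed = False
        for chain in chains:
            for s in synsets:
                if any((s.path_similarity(t) or 0) >= threshold for _, t in chain):
                    chain.append((word, s))
                    placed = True
                    break
            if placed:
                break
        if not placed:
            chains.append([(word, synsets[0])])
    return chains

for chain in build_chains(["car", "engine", "wheel", "banana", "fruit"]):
    print([w for w, _ in chain])
```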
Fronting of a non-finite VP across a finite main verb, akin to German "VP-topicalization", can also be found in Czech and Polish. The paper discusses evidence from large corpora for this process and some of its properties, both syntactic and information-structural. Based on this case, criteria for more user-friendly searching and retrieval of corpus data in syntactic research are developed.
Multiple hierarchies
(2005)
In this paper, we present the Multiple Annotation approach, which solves two problems: the problem of annotating overlapping structures, and the problem that occurs when documents should be annotated according to different, possibly heterogeneous tag sets. This approach has many advantages: it is based on XML, the modeling of alternative annotations is possible, each level can be viewed separately, and new levels can be added at any time. The files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) are described. These representations serve as a base for several applications.
This paper describes the standardization problems that come up in a diachronic corpus: it has to cope with differing standards with regard to diplomaticity, annotation, and header information. Such highly heterogeneous texts must be standardized to allow for comparative research without (too much) loss of information.
Unity in diversity
(2005)
This paper describes the creation and preparation of TUSNELDA, a collection of corpus data built for linguistic research. This collection contains a number of linguistically annotated corpora which differ in various aspects such as language, text sorts / data types, encoded annotation levels, and linguistic theories underlying the annotation. The paper focuses on this variation on the one hand, and on the way these heterogeneous data are integrated into one resource on the other.
ANNIS
(2004)
In this paper, we discuss the design and implementation of our first version of the database "ANNIS" ("ANNotation of Information Structure"). For research based on empirical data, ANNIS provides a uniform environment for storing this data together with its linguistic annotations. A central database promotes standardized annotation, which facilitates interpretation and comparison of the data. ANNIS is used through a standard web browser and offers tier-based visualization of data and annotations, as well as search facilities that allow for cross-level and cross-sentential queries. The paper motivates the design of the system, characterizes its user interface, and provides an initial technical evaluation of ANNIS with respect to data size and query processing.
Focus strategies in Chadic
(2004)
We argue that the standard focus theories reach their limits when confronted with the focus systems of the Chadic languages. The backbone of the standard focus theories consists of two assumptions, both called into question by the languages under consideration. Firstly, it is standardly assumed that focus is generally marked by stress. The Chadic languages, however, exhibit a variety of different devices for focus marking. Secondly, it is assumed that focus is always marked. In Tangale, at least, focus is not marked consistently on all types of constituents. The paper offers two possible solutions to this dilemma.
We argue that there is a crucial difference between determiner and adverbial quantification. Following Herburger [2000] and von Fintel [1994], we assume that determiner quantifiers quantify over individuals and adverbial quantifiers over eventualities. While it is usually assumed that sentences with determiner quantifiers and those with adverbial quantifiers receive essentially the same semantics, we show by way of new data that quantification over events is more restricted than quantification over individuals. This is because eventualities, in contrast to individuals, have to be located in time, which is done using contextual information according to a pragmatic resolution strategy. If the contextual information and the tense information given in the respective sentence contradict each other, the sentence is uninterpretable. We conclude that this is why, in these cases, adverbial quantification, i.e. quantification over eventualities, is impossible, whereas quantification over individuals is fine.
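One way to schematize the contrast the abstract draws (this is an illustration only, not the authors' own formalization) is with the following logical forms, where tau(e) is the running time of an eventuality, t_0 the utterance time, and t_C a contextually resolved time:

```latex
% Illustrative logical forms (not the authors' formalization).
% Determiner quantification over individuals ("Most dogs barked"):
\[ \mathrm{MOST}_x\,[\mathrm{dog}(x)]\;[\exists e\,(\mathrm{bark}(e)\wedge\mathrm{Agent}(e,x)\wedge\tau(e)\prec t_0)] \]
% Adverbial quantification over eventualities ("Mostly, dogs barked"):
\[ \mathrm{MOST}_e\,[C(e)\wedge\tau(e)\subseteq t_C]\;[\exists x\,(\mathrm{dog}(x)\wedge\mathrm{bark}(e)\wedge\mathrm{Agent}(e,x))] \]
```

On this sketch, the adverbial quantifier's restrictor contains the contextually supplied time t_C; if t_C is incompatible with the sentence's tense, the restrictor cannot be resolved, whereas the individual-quantifying variant imposes no comparable temporal restriction.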
The layer-by-layer (LbL) assembly of polyelectrolytes has been studied extensively for the preparation of ultrathin films because of the versatility of the build-up process. Control of the permeability of these layers is particularly important, as there are potential drug delivery applications, and multilayered polyelectrolyte microcapsules are of great interest as possible microcontainers. This work presents two methods that can serve as drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is a thermo-sensitive polymer with a Lower Critical Solution Temperature (LCST) around 32 °C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that the LCST decreases with the addition of salt. This work presents Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of PNIPAM inside the capsules was demonstrated with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside the capsules). Exploiting the capsules as microcontainers is advantageous not only because of the protection they give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to reduced water sensitivity, particularly after heating, which melts the wax and forms a barrier layer. This was shown with neutron reflectivity through the decreased presence of D2O in planar polyelectrolyte films after annealing. The permeability of capsules could also be decreased by the addition of a wax layer, as shown by the increase in recovery time measured with Fluorescence Recovery After Photobleaching (FRAP). In general, two advanced methods, potentially suitable for drug delivery systems, have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation, coupled with the ability to tune the wall thickness, makes it possible to control the release profile of the molecule of interest.
In semi-arid savannas, unsustainable land use can lead to the degradation of entire landscapes, e.g. in the form of shrub encroachment. This leads to habitat loss and is assumed to reduce species diversity. In BIOTA phase 1, we investigated the effects of land use on population dynamics at the farm scale. In phase 2 we scale up to consider the whole regional landscape, consisting of a diverse mosaic of farms with different historic and present land-use intensities. This mosaic creates a heterogeneous, dynamic pattern of structural diversity at a large spatial scale. Understanding how the region-wide, dynamic land-use pattern affects the abundance of animal and plant species requires the integration of processes on large as well as on small spatial scales. In our multidisciplinary approach, we integrate information from remote sensing, genetic and ecological field studies as well as small-scale process models into a dynamic, region-wide simulation tool. Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung workshop, 9-10 February 2006.
Decisions for the conservation of biodiversity and the sustainable management of natural resources are typically made at large scales, i.e. the landscape level. However, understanding and predicting the effects of land use and climate change on scales relevant for decision-making requires including both large-scale vegetation dynamics and small-scale processes such as soil-plant interactions. Integrating the results of multiple BIOTA subprojects enabled us to include the necessary data from soil science, botany, socio-economics and remote sensing in a high-resolution, process-based and spatially explicit model. Using an example from a sustainably used research farm and a communally used, degraded farming area in semi-arid southern Namibia, we show the power of simulation models as a tool to integrate processes across disciplines and scales.
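As a purely hypothetical illustration of what a process-based, spatially explicit grid model looks like in principle (this is a toy sketch, not the BIOTA simulation model; all parameters and rules are invented), consider:

```python
# Toy, spatially explicit grid model of shrub cover under grazing pressure.
# Illustrates coupling a local process rule (neighbourhood recruitment plus
# grazing-favoured establishment) with landscape-scale dynamics. NOT the
# BIOTA model; every parameter here is an invented assumption.
import numpy as np

rng = np.random.default_rng(0)
SIZE = 50          # cells per side of the landscape grid
STEPS = 100        # simulated years
GRAZING = 0.30     # assumed grazing pressure (0..1)

# state: fraction of shrub cover per cell, initialised with light encroachment
shrub = rng.uniform(0.0, 0.1, size=(SIZE, SIZE))

for _ in range(STEPS):
    # small-scale process: local shrub recruitment from neighbouring cells
    neighbours = (np.roll(shrub, 1, 0) + np.roll(shrub, -1, 0) +
                  np.roll(shrub, 1, 1) + np.roll(shrub, -1, 1)) / 4.0
    recruitment = 0.05 * neighbours * (1.0 - shrub)
    # grazing removes grass competition and so favours shrub establishment
    shrub = np.clip(shrub + recruitment + 0.02 * GRAZING * (1.0 - shrub), 0, 1)

print("mean shrub cover after", STEPS, "steps:", float(shrub.mean()))
```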
The rigorous development, application and validation of distributed hydrological models requires evaluating data in a spatially distributed way. In particular, spatial model predictions such as the distribution of soil moisture, runoff-generating areas, nutrient-contributing areas or erosion rates are to be assessed against spatially distributed observations. Model inputs, such as the distribution of modelling units derived from GIS and remote sensing analyses, should also be evaluated against ground-based observations of landscape characteristics. So far, however, quantitative methods of spatial field comparison have rarely been used in hydrology. In this paper, we present algorithms for comparing observed and simulated spatial hydrological data. The methods can be applied to binary and categorical data on regular grids. They comprise cell-by-cell algorithms, cell-neighbourhood approaches that account for fuzziness of location, and multi-scale algorithms that evaluate the similarity of spatial fields with changing resolution. All methods provide a quantitative measure of the similarity of two maps. The comparison methods are applied in two mountainous catchments in southern Germany (Brugga, 40 km²) and Austria (Löhnersbach, 16 km²). As an example of binary hydrological data, the distribution of saturated areas is analyzed in both catchments. For categorical data, vegetation zones that are associated with different runoff generation mechanisms are analyzed in the Löhnersbach. Mapped spatial patterns are compared to simulated patterns from terrain index calculations and from satellite image analysis. We discuss how particular features of visual similarity between the spatial fields are captured by the quantitative measures, leading to recommendations on suitable algorithms in the context of evaluating distributed hydrological models.
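The abstract names three families of comparison measures; as a minimal, simplified sketch (not the paper's algorithms), a cell-by-cell agreement score and a neighbourhood-based hit rate for binary grids could look like this:

```python
# Minimal sketch of two map-comparison measures for binary grids:
# (1) plain cell-by-cell agreement and (2) a neighbourhood hit rate that
# tolerates small location errors (fuzziness of location). Simplified
# illustrations only, not the algorithms of the paper.
import numpy as np

def cell_by_cell_agreement(observed, simulated):
    """Fraction of cells with identical binary values."""
    return float(np.mean(observed == simulated))

def fuzzy_hit_rate(observed, simulated, radius=1):
    """Fraction of observed 'true' cells matched by a simulated 'true' cell
    within a (2*radius+1)^2 moving window."""
    padded = np.pad(simulated, radius, mode="constant", constant_values=0)
    hits, total = 0, 0
    rows, cols = observed.shape
    for i in range(rows):
        for j in range(cols):
            if observed[i, j]:
                total += 1
                window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
                hits += int(window.any())
    return hits / total if total else 1.0

if __name__ == "__main__":
    obs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
    sim = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)
    print(cell_by_cell_agreement(obs, sim))    # 7 of 9 cells identical
    print(fuzzy_hit_rate(obs, sim, radius=1))  # both observed cells matched -> 1.0
```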
Integration of digital elevation models and satellite images to investigate geological processes
(2006)
In order to better understand the geological boundary conditions for ongoing or past surface processes, geologists face two important questions: 1) How can we gain additional knowledge about geological processes by analyzing digital elevation models (DEMs) and satellite images? 2) Do these efforts present a viable approach for more efficient research? Here, we present case studies at a variety of scales and levels of resolution to illustrate how classical geological approaches can be substantially complemented and enhanced by remote sensing techniques. Commonly, satellite- and DEM-based studies are used in a first step of assessing areas of geologic interest. While in the past the analysis of satellite imagery (e.g. Landsat TM) and aerial photographs was carried out to characterize regional geologic characteristics, particularly structure and lithology, geologists have increasingly ventured into a process-oriented approach. This entails assessing structures and geomorphic features with a concept that includes active tectonics, or tectonic activity on time scales relevant to humans. In addition, these efforts involve analyzing and quantifying the processes acting at the surface by integrating different remote sensing and topographic data (e.g. SRTM DEM, SSM/I, GPS, Landsat 7 ETM, ASTER, IKONOS, …). A combined structural and geomorphic study in the hyperarid Atacama Desert demonstrates the use of satellite and digital elevation data for assessing geological structures formed by long-term (millions of years) feedback mechanisms between erosion and crustal bending (Zeilinger et al., 2005). The medium-term change of landscapes over hundreds of thousands to millions of years in a more humid setting is shown in an example from southern Chile. Based on an analysis of rivers and watersheds combined with landscape parameterization using digital elevation models, the geomorphic evolution and change in drainage pattern in the coastal Cordillera can be quantified and put into the context of the seismotectonic segmentation of a tectonically active region. This has far-reaching implications for earthquake rupture scenarios and hazard mitigation (K. Rehak, see poster at the IMAF Workshop). Two examples illustrate short-term processes on decadal, centennial and millennial time scales. One study uses orogen-scale precipitation gradients derived from remotely sensed passive microwave data (Bookhagen et al., 2005a), demonstrating how debris flows were triggered as a response of slopes to abnormally strong rainfall in the interior parts of the Himalaya during intensified monsoons. The area of the orogen that receives high amounts of precipitation during intensified monsoons also contains numerous landslide deposits of up to 1 km³ in volume that were generated during intensified monsoon phases at about 27 and 9 ka (Bookhagen et al., 2005b). Another project, in the Swiss Alps, compared sets of aerial photographs recorded in different years; by calculating high-resolution surfaces, the mass transport in a landslide could be reconstructed (M. Schwab, Universität Bern). All these examples, although representing only a short and limited selection of projects using remote sensing data in geology, share the goal of quantifying geological processes. With increasing data resolution and new sensors, future projects will enable us to recognize even more patterns and/or structures indicative of geological processes in tectonically active areas.
This is crucial for the analysis of natural hazards such as earthquakes, tsunamis and landslides, as well as hazards related to climatic variability. The integration of remotely sensed data at different spatial and temporal scales with field observations is becoming increasingly important. Many presently highly populated and increasingly utilized regions are subject to significant environmental pressure and often constitute areas of concentrated economic value. Combined remote sensing and ground-truthing in these regions is particularly important, as geologic, seismicity and hydrologic data may be limited there due to the recency of infrastructural development. Monitoring ongoing processes and evaluating the remotely sensed data in terms of the recurrence of events will greatly enhance our ability to assess and mitigate natural hazards. Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung workshop, 9-10 February 2006.
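To illustrate the kind of landscape parameterization from gridded DEMs mentioned in the preceding abstract, the sketch below computes two of the simplest metrics, slope and local relief. It is illustrative only and not the workflow of the cited studies; the cell size and the synthetic DEM are assumptions made for the example.

```python
# Minimal sketch of landscape parameterization from a gridded DEM:
# slope magnitude and local relief. Illustrative only; cell size and the
# synthetic test DEM are assumptions for the example.
import numpy as np

CELL_SIZE = 90.0  # metres, e.g. a 3-arc-second SRTM-like grid spacing

def slope_degrees(dem, cell_size=CELL_SIZE):
    """Slope magnitude in degrees from a 2-D elevation grid."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def local_relief(dem, radius=2):
    """Maximum minus minimum elevation within a square moving window."""
    rows, cols = dem.shape
    relief = np.zeros_like(dem, dtype=float)
    for i in range(rows):
        for j in range(cols):
            window = dem[max(i - radius, 0):i + radius + 1,
                         max(j - radius, 0):j + radius + 1]
            relief[i, j] = window.max() - window.min()
    return relief

if __name__ == "__main__":
    # synthetic DEM: a tilted plane with superimposed noise
    y, x = np.mgrid[0:100, 0:100]
    dem = 0.2 * CELL_SIZE * x + np.random.default_rng(0).normal(0, 5, x.shape)
    print("mean slope [deg]:", float(slope_degrees(dem).mean()))
    print("mean relief [m]:", float(local_relief(dem).mean()))
```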