It is desirable to reduce the potential threats that result from the variability of nature, such as droughts or heat waves that lead to food shortage, or the other extreme, floods that lead to severe damage. To prevent such catastrophic events, it is necessary to understand, and to be capable of characterising, nature's variability. Typically one aims to describe the underlying dynamics of geophysical records with differential equations. There are, however, situations where this does not support the objectives, or is not feasible, e.g., when little is known about the system, or it is too complex for the model parameters to be identified. In such situations it is beneficial to regard certain influences as random, and describe them with stochastic processes. In this thesis I focus on such a description with linear stochastic processes of the FARIMA type and concentrate on the detection of long-range dependence. Long-range dependent processes show an algebraic (i.e. slow) decay of the autocorrelation function. Detection of the latter is important with respect to, e.g., trend tests and uncertainty analysis. Aiming to provide a reliable and powerful strategy for the detection of long-range dependence, I suggest a way of addressing the problem which is somewhat different from standard approaches. Commonly used methods are based either on investigating the asymptotic behaviour (e.g., log-periodogram regression), or on finding a suitable potentially long-range dependent model (e.g., FARIMA[p,d,q]) and testing the fractional difference parameter d for compatibility with zero. Here, I suggest rephrasing the problem as a model selection task, i.e. comparing the most suitable long-range dependent and the most suitable short-range dependent model. 
Approaching the task this way requires a) a suitable class of long-range and short-range dependent models along with suitable means for parameter estimation and b) a reliable model selection strategy, capable of discriminating also between non-nested models. The flexible FARIMA model class together with the Whittle estimator fulfils the first requirement. Standard model selection strategies, e.g., the likelihood-ratio test, are frequently not powerful enough for a comparison of non-nested models. Thus, I suggest extending this strategy with a simulation-based model selection approach suitable for such a direct comparison. The approach follows the procedure of a statistical test, with the likelihood ratio as the test statistic. Its distribution is obtained via simulations using the two models under consideration. For two simple models and different parameter values, I investigate the reliability of p-value and power estimates obtained from the simulated distributions. The results turned out to depend on the model parameters. However, in many cases the estimates allow an adequate model selection to be established. An important feature of this approach is that it immediately reveals the ability or inability to discriminate between the two models under consideration. Two applications, a trend detection problem in temperature records and an uncertainty analysis for flood return level estimation, accentuate the importance of having reliable methods at hand for the detection of long-range dependence. In the case of trend detection, falsely concluding long-range dependence implies an underestimation of a trend and possibly leads to a delay of the measures needed to counteract the trend. Ignoring long-range dependence, although present, leads to an underestimation of confidence intervals and thus to an unjustified belief in safety, as is the case for the return level uncertainty analysis. 
A reliable detection of long-range dependence is thus highly relevant in practical applications. Examples related to extreme value analysis are not limited to hydrological applications. The increased uncertainty of return level estimates is a potential problem for all records from autocorrelated processes; an interesting example in this respect is the assessment of the maximum strength of wind gusts, which is important for designing wind turbines. The detection of long-range dependence is also a relevant problem in the exploration of financial market volatility. By rephrasing the detection problem as a model selection task and suggesting refined methods for model comparison, this thesis contributes to the discussion on and development of methods for the detection of long-range dependence.
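The simulation-based model selection procedure described above can be illustrated in miniature. The sketch below is not the FARIMA/Whittle setup of the thesis: it compares two deliberately simple, non-nested i.i.d. models (exponential vs. lognormal) and estimates a p-value for the likelihood-ratio statistic by simulating its distribution under the fitted null model; all parameter values are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglik_exp(x):
    # Plug-in log-likelihood at the exponential MLE (rate = 1/mean)
    lam = 1.0 / x.mean()
    return x.size * np.log(lam) - lam * x.sum()

def loglik_lognorm(x):
    # Plug-in log-likelihood at the lognormal MLEs (mu, sigma from log-data)
    lx = np.log(x)
    mu, sig = lx.mean(), lx.std()
    return (-x.size * np.log(sig * np.sqrt(2 * np.pi))
            - lx.sum()                                  # Jacobian of the log transform
            - ((lx - mu) ** 2).sum() / (2 * sig ** 2))

def lr(x):
    # Likelihood-ratio statistic for the two non-nested candidates
    return loglik_lognorm(x) - loglik_exp(x)

def simulated_pvalue(x, n_sim=500):
    """Parametric bootstrap: simulate the LR under the fitted null (exponential)."""
    obs = lr(x)
    lam = 1.0 / x.mean()
    sims = np.array([lr(rng.exponential(1.0 / lam, x.size)) for _ in range(n_sim)])
    return (sims >= obs).mean()

data = rng.exponential(2.0, 200)    # data that truly are exponential
p = simulated_pvalue(data)          # should not reject the exponential null
```

As in the thesis' approach, the simulated reference distribution can equally be generated under the alternative model, which immediately reveals whether the two models are discriminable at the given sample size.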
In modern industrialized countries, several hundred thousand people die every year due to sudden cardiac death. The individual risk of sudden cardiac death cannot be defined precisely by commonly available, non-invasive diagnostic tools like Holter monitoring, highly amplified ECG and traditional linear analysis of heart rate variability (HRV). Therefore, we apply some rather unconventional methods of nonlinear dynamics to analyse the HRV. In particular, some complexity measures based on symbolic dynamics, as well as a new measure, the renormalized entropy, detect abnormalities in the HRV of several patients who had been classified in the low-risk group by traditional methods. A combination of these complexity measures with the parameters in the frequency domain seems to be a promising way to arrive at a more precise definition of the individual risk. These findings have to be validated in a representative number of patients.
We have used techniques of nonlinear dynamics to compare a special model for the reversals of the Earth's magnetic field with the observational data. Although this model is rather simple, there is no essential difference from the data in terms of well-known characteristics, such as the correlation function and the probability distribution. Applying methods of symbolic dynamics, we have found that the considered model is not able to describe the dynamical properties of the observed process. These significant differences are expressed by algorithmic complexity and Rényi information.
Two deterministic processes leading to roughening interfaces are considered. It is shown that the dynamics of linear perturbations of turbulent regimes in coupled map lattices is governed by a discrete version of the Kardar-Parisi-Zhang equation. The asymptotic scaling behavior of the perturbation field is investigated in the case of large lattices. Secondly, the dynamics of an order-disorder interface is modelled with a simple two-dimensional coupled map lattice possessing a turbulent and a laminar state. It is demonstrated that in some range of parameters the spreading of the turbulent state is accompanied by kinetic roughening of the interface.
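For reference, the continuum Kardar-Parisi-Zhang equation, whose discrete version is invoked above, reads in its standard form (h is the interface height, ν a diffusion coefficient, λ the nonlinearity strength, and η a Gaussian white noise):

```latex
\partial_t h(\mathbf{x},t) \;=\; \nu \,\nabla^2 h \;+\; \frac{\lambda}{2}\,\bigl(\nabla h\bigr)^2 \;+\; \eta(\mathbf{x},t)
```

The nonlinear gradient term is what distinguishes KPZ scaling from the linear Edwards-Wilkinson case and governs the asymptotic roughening exponents studied here.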
The use of unilateral force under George W. Bush is not a new phenomenon in US foreign policy. As the author argues, it is merely a continuation of Bill Clinton’s foreign policy and is deeply rooted in both the foreign policy traditions of Jacksonianism and Wilsonianism. The analysis concludes that Clinton used unilateralist foreign policy with a 'smile' whereas the Bush administration uses it with an attitude.
Strange nonchaotic attractors typically appear in quasiperiodically driven nonlinear systems. Two methods for their characterization are proposed. The first one is based on a bifurcation analysis of the systems resulting from periodic approximations of the quasiperiodic forcing. Secondly, we propose to characterize their strangeness by calculating a phase sensitivity exponent that measures the sensitivity with respect to changes of the phase of the external force. It is shown that phase sensitivity appears if there is a non-zero probability for positive local Lyapunov exponents to occur.
We have studied bifurcation phenomena for the incompressible Navier-Stokes equations in two space dimensions with periodic boundary conditions. Fourier representations of velocity and pressure have been used to transform the original partial differential equations into systems of ordinary differential equations (ODE), to which then numerical methods for the qualitative analysis of systems of ODE have been applied, supplemented by the simulative calculation of solutions for selected initial conditions. Invariant sets, notably steady states, have been traced for varying Reynolds number or strength of the imposed forcing, respectively. A complete bifurcation sequence leading to chaos is described in detail, including the calculation of the Lyapunov exponents that characterize the resulting chaotic branch in the bifurcation diagram.
During the last decades, the global change of the environment has caused a dramatic loss of habitats and species. In Central Europe, open habitats are particularly affected. The main objective of this thesis was to experimentally test the suitability of wild megaherbivore grazing as a conservation tool to manage open habitats. We studied the effect of wild ungulates over three years in a 160 ha game preserve in NE Germany in three successional stages: (i) Corynephorus canescens-dominated grassland, (ii) ruderal tall-forb vegetation dominated by Tanacetum vulgare and (iii) Pinus sylvestris pioneer forest. Our results demonstrate that wild megaherbivores considerably affected species composition and delayed successional pathways in open habitats. Grazing effects differed considerably between successional stages: species richness was higher in grazed ruderal and pioneer forest plots, but not in the Corynephorus sites. Species composition changed significantly in the Corynephorus and ruderal sites. Grazed ruderal sites had turned into sites with very short vegetation dominated by Agrostis spp. and the moss Brachythecium albicans; most species did not flower. Woody plant cover was significantly affected only in the pioneer forest sites. Young pine trees were severely damaged and tree height was considerably reduced, leading to a “Pinus-macchie” appearance. Ecological patterns and processes are known to vary with spatial scale. Since grazing by megaherbivores has a strong spatial component, the scale of monitoring grazing success may largely differ among and within different systems. Thus, the second aim of this thesis was to test whether grazing effects are consistent over different spatial scales, and to give recommendations for appropriate monitoring scales. For this purpose, we studied grazing effects on plant community structure using multi-scale plots that included three nested spatial scales (0.25 m2, 4 m2, and 40 m2). 
Over all vegetation types, the scale of observation directly affected grazing effects on woody plant cover and on floristic similarity, but not on the proportion of open soil and species richness. Grazing effects manifested at small scales regarding floristic similarity in pioneer forest and ruderal sites and regarding species richness in ruderal sites. The direction of scale-effects on similarity differed between vegetation types: Grazing effects on floristic similarity in the Corynephorus sites were significantly higher at the medium and large scale, while in the pioneer forest sites they were significantly higher at the smallest scale. Disturbances initiate vegetation changes by creating gaps and affecting colonization and extinction rates. The third intention of the thesis was to investigate the effect of small-scale disturbances on the species-level. In a sowing experiment, we studied early establishment probabilities of Corynephorus canescens, a key species of open sandy habitats. Applying two different regimes of mechanical ground disturbance (disturbed and undisturbed) in the three successional stages mentioned above, we focused on the interactive effects of small-scale disturbances, successional stage and year-to-year variation. Disturbance led to higher emergence in a humid and to lower emergence in a very dry year. Apparently, when soil moisture was sufficient, the main factor limiting C. canescens establishment was competition, while in the dry year water became the limiting factor. Survival rates were not affected by disturbance. In humid years, C. canescens emerged in higher numbers in open successional stages while in the dry year, emergence rates were higher in late stages, suggesting an important role of late successional stages for the persistence of C. canescens. 
We conclude that wild ungulate grazing is a useful tool to slow down succession and to preserve a species-rich, open landscape, because it not only creates disturbances, thereby supporting early successional stages, but also efficiently controls woody plant cover. However, wild ungulate grazing considerably changed the overall appearance of the landscape. Additional measures like shifting exclosures might be necessary to allow vulnerable species to flower and reproduce. We further conclude that studying grazing impacts on a range of scales is crucial, since different parameters are affected at different spatial scales. Larger scales are suitable for assessing grazing impact on structural parameters like the proportion of open soil or woody plant cover, whereas species richness and floristic similarity are affected at smaller scales. Our results further indicate that the optimal strategy for promoting C. canescens is to apply disturbances just before seed dispersal and not during dry years. Further, at the landscape scale, facilitation by late successional species may be an important mechanism for the persistence of protected pioneer species.
This contribution describes a generator of stochastic time series of daily precipitation for the interior of Israel from c. 90 to 900 mm mean annual precipitation (MAP) as a tool for studies of daily rain variability. The probability of rainfall on a given day of the year is described by a regular Gaussian peak curve function. The amount of rain is drawn randomly from an exponential distribution whose mean is the daily mean rain amount (averaged across years for each day of the year) described by a flattened Gaussian peak curve. Parameters for the curves have been calculated from monthly aggregated, long-term rain records from seven meteorological stations. Parameters for arbitrary points on the MAP gradient are calculated from a regression equation with MAP as the only independent variable. The simple structure of the generator allows it to produce time series with daily rain patterns that are projected under climate change scenarios and simultaneously control MAP. Increasing within-year variability of daily precipitation amounts also increases among-year variability of MAP as predicted by global circulation models. Thus, the time series incorporate important characteristics for climate change research and represent a flexible tool for simulations of daily vegetation or surface hydrology dynamics.
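The structure of such a generator can be sketched as follows. This is a schematic reimplementation, not the published code: all parameter values are invented placeholders, the "flattened" peak is realized as a super-Gaussian (the abstract does not specify the exact functional form), and day-of-year distances are taken circularly so a wet season may span the turn of the year.

```python
import numpy as np

rng = np.random.default_rng(42)

def _circ(day, center):
    # signed day-of-year distance, wrapping around the turn of the year
    return (day - center + 182.5) % 365.0 - 182.5

def gaussian_peak(day, peak, center, width):
    """Regular Gaussian peak curve: probability of rain on a given day."""
    return peak * np.exp(-0.5 * (_circ(day, center) / width) ** 2)

def flattened_peak(day, peak, center, width, power=4):
    """Flat-topped ('flattened') peak; here a super-Gaussian as a placeholder."""
    return peak * np.exp(-0.5 * (np.abs(_circ(day, center)) / width) ** power)

def simulate_year(p_max=0.5, center=15.0, p_width=50.0,
                  m_max=8.0, m_width=60.0):
    """One year of daily rain [mm]: wet/dry days from a Bernoulli draw,
    amounts from an exponential whose mean follows the flattened peak."""
    days = np.arange(365)
    p_wet = gaussian_peak(days, p_max, center, p_width)        # rain probability
    mean_amount = flattened_peak(days, m_max, center, m_width)  # daily mean amount
    wet = rng.random(365) < p_wet
    return np.where(wet, rng.exponential(np.maximum(mean_amount, 1e-12)), 0.0)

rain = simulate_year()
```

In the published generator, the curve parameters would be regressed on MAP, so that a single MAP value positions the station on the rainfall gradient; here they are fixed for illustration.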
Aim The aim of the present study was to examine young female volleyballers’ body build, physical abilities, technical skills and psychophysiological properties in relation to their performance at competitions. The sample consisted of 46 female volleyballers aged 13-16 years. 49 basic anthropometric measurements were taken and 65 proportions and body composition characteristics were calculated. 9 physical ability tests, 9 volleyball technical skills tests and 21 psychophysiological tests were carried out. Game performance was recorded with the computer program Game, which made it possible to register the performance of technical elements for each player and calculated an index of proficiency for each girl for each element. The first control group consisted of 74 female volleyballers aged 13–15 years, for whom reduced anthropometry was carried out and 28 games were recorded. The second control group consisted of 586 ordinary schoolgirls aged 13–16 years, for whom full anthropometry was carried out. Results In order to systematize all anthropometric characteristics, we first studied the essence of the anthropometric structure of the body as a whole. It turned out to be a characteristic system where all variables are in significant correlation with one another and where the leading characteristics are height and weight. Therefore we based the classification on the mean height and weight of the whole sample. We formed a 5-class SD classification. There are three classes of concordance between height and weight: small height – small weight, medium height – medium weight, big height – big weight. The other two classes were classes of discordance between height and weight: pycnomorphs and leptomorphs. We managed to show that a gradual increase in height and weight brought about a statistically significant increase in length, breadth and depth measurements, circumferences, bone thicknesses and skinfolds. 
There were also systematic changes in indices and body composition characteristics. Pycnomorphs and leptomorphs also showed differences specific to their body types in body measurements and body composition. The results of all tests were submitted to basic statistical analysis, and correlations were computed between all the tests (volleyball technical skills, psychophysiological abilities, physical abilities), all basic anthropometric variables (n = 49) and all proportions and body composition characteristics (n = 65). All anthropometric measurements and test results were correlated with the index of proficiency for all elements of the game. The best linear regression models were calculated for predicting proficiency in different elements of the game. Body build and all kinds of tests took part in predicting proficiency in the game. The most essential for performing attack, block and feint were the anthropometric and psychophysiological models. The studied complex of body build characteristics and test results determines the players’ proficiency at competitions, is an important tool for tracking a player’s individual development, makes it possible to select volleyballers from among schoolgirls, and represents the whole-body constitutional model of a young female volleyballer. Outlook Our outlook for the future is to continue recording all Estonian championship games with the computer program Game, to continue the players’ anthropometric measuring and psychophysiological testing at competitions, and to compile a national register for assessing the development of individual players and teams.
An approach to the development of fluorescent probes to follow polymerizations in situ using fluorinated cross-conjugated enediynes (Y-enynes) is reported. Different substitution patterns in the Y-enynes result in distinct solvatochromic behavior. β,β-Bis(phenylethynyl)pentafluorostyrene 7, which bears no donor substituents and only fluorine at the styrene moiety, shows no solvatochromism. Donor-substituted β,β-bis(3,4,5-trimethoxyphenylethynyl)pentafluorostyrene 8 and β,β-bis(4-butyl-2,3,5,6-tetrafluorophenylethynyl)-3,4,5-trimethoxystyrene 9 exhibit solvatochromism upon change of solvent polarity. Y-enyne 8 showed the largest solvatochromic shift (94 nm bathochromic shift) upon changing solvent from cyclohexane to acetonitrile. A smaller solvatochromic response (44 nm bathochromic shift) was observed for 9. Lippert–Mataga treatment of 8 and 9 yields slopes of -10,800 and -6,400 cm⁻¹, respectively. This corresponds to a change in dipole moment of 9.6 and 6.9 D, respectively. The solvatochromic behavior in 8 and 9 supports the formation of an intramolecular charge transfer (ICT) state. The low fluorescence quantum yields are caused by competitive double bond rotation. The fluorescence decay time of 9 decreases in methyltetrahydrofuran from 2.1 ns at 77 K to 0.11 ns at 200 K. Efficient single bond rotation in 9 was frozen at -50 °C in a configuration in which the trimethoxyphenyl ring is perpendicular to the fluorinated rings. 7–9 are photostable compounds. The X-ray structure of 7 shows it is not planar and that its conjugation is distorted. Y-enyne 7 stacks in the solid state showing coulombic, acetylene–arene, and fluorine–π interactions.
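The Lippert–Mataga treatment mentioned above relates the solvent-polarity dependence of the Stokes shift to the change in dipole moment upon excitation. In its standard textbook form (a sketch of the generic relation, not the authors' exact fitting procedure):

```latex
\bar{\nu}_{\mathrm{abs}} - \bar{\nu}_{\mathrm{em}}
  \;=\; \frac{2\,(\Delta\mu)^{2}}{h\,c\,a^{3}}\,\Delta f \;+\; \mathrm{const},
\qquad
\Delta f \;=\; \frac{\varepsilon - 1}{2\varepsilon + 1} \;-\; \frac{n^{2} - 1}{2n^{2} + 1}
```

Here a is the Onsager cavity radius, ε the solvent dielectric constant and n its refractive index; Δμ is extracted from the slope of the Stokes shift (or band position) plotted against the orientation polarizability Δf, which is how the dipole moment changes of 9.6 and 6.9 D are obtained from the reported slopes.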
Numerous recent publications on the psychological meaning of “if” have proposed a probabilistic interpretation of conditional sentences. According to the proponents of probabilistic approaches, sentences like “If the weather is nice, I will be at the beach tomorrow” (or “If p, then q” in the abstract version) express a high probability of the consequent (being at the beach), given the antecedent (nice weather). When people evaluate conditional sentences, they presumably do so by deriving the conditional probability P(q|p) using a procedure called the Ramsey test. This view contradicts the hitherto dominant Mental Model Theory (MMT, Johnson-Laird, 1983), which proposes that conditional sentences refer to possibilities in the world that are represented in the form of mental models. Whereas probabilistic approaches have gained a lot of momentum in explaining the interpretation of conditionals, there is still no conclusive probabilistic account of conditional reasoning. This thesis investigates the potential of a comprehensive probabilistic account of conditionals that covers the interpretation of conditionals as well as the conclusions drawn from these conditionals when they are used as premises in an inference task. The first empirical chapter of this thesis, Chapter 2, implements a further investigation of the interpretation of conditionals. A plain version of the Ramsey test as proposed by Evans and Over (2004) was tested against a similarity-sensitive version of the Ramsey test (Oberauer, 2006) in two experiments using variants of the probabilistic truth table task (Experiments 2.1 and 2.2). When it comes to deciding whether an instance is relevant for the evaluation of a conditional, similarity seems to play a minor role. Once the decision about relevance is made, believability judgments of the conditional seem to be unaffected by the similarity manipulation, and judgments are based on the frequency of instances, in the way predicted by the plain Ramsey test. 
In Chapter 3, contradicting predictions of the probabilistic approaches to conditional reasoning of Verschueren et al. (2005), Evans and Over (2004) and Oaksford and Chater (2001) are tested against each other. Results from the probabilistic truth table task modified for inference tasks support the account of Oaksford and Chater (Experiment 3.1). A learning version of the task and a design with everyday conditionals yielded results unpredicted by any of the theories (Experiments 3.2-3.4). Based on these results, a new probabilistic 2-stage model of conditional reasoning is proposed. To preclude claims that the use of the probabilistic truth table task (or variants thereof) favors judgments reflecting conditional probabilities, Chapter 4 combines methodologies used by proponents of the MMT with the probabilistic truth table task. In three experiments (4.1-4.3) it could be shown, for believability judgments of the conditional and inferences drawn from it, that causal information about counterexamples prevails only when no frequencies of exceptional cases are present. Experiment 4.4 extends these findings to everyday conditionals. A probabilistic estimation process based on frequency information is used to explain the results on all tasks. The findings conform with a probabilistic approach to conditionals and moreover constitute an explanatory challenge for the MMT. In view of all the evidence gathered in this dissertation, it seems justified to draw the picture of a comprehensive probabilistic view of conditionals quite optimistically. Probability estimates not only explain the believability people assign to a conditional sentence, they also explain to what extent people are willing to draw conclusions from those sentences.
Sucrose synthase (Susy) is a key enzyme of sucrose metabolism, catalysing the reversible conversion of sucrose and UDP to UDP-glucose and fructose. Therefore, its activity, localization and function have been studied in various plant species. It has been shown that Susy can play a role in supplying energy in companion cells for phloem loading (Fu and Park, 1995), provides substrates for starch synthesis (Zrenner et al., 1995), and supplies UDP-glucose for cell wall synthesis (Haigler et al., 2001). Analysis of the Arabidopsis genome identified six Susy isoforms. The expression of these isoforms was investigated using promoter-reporter gene constructs (GUS) and real-time RT-PCR. Although these isoforms are closely related at the protein level, they have radically different spatial and temporal patterns of expression in the plant, with no two isoforms showing the same distribution. More than one isoform is expressed in all organs examined. Some of them have high but specific expression in particular organs or developmental stages whilst others are constantly expressed throughout the whole plant and across various stages of development. The in planta functions of the six Susy isoforms were explored through analysis of T-DNA insertion mutants and RNAi lines. Plants lacking expression of individual isoforms show no differences in growth and development, and are not significantly different from wild type plants in soluble sugar, starch and cellulose contents under all growth conditions investigated. Loss of the Sus3 isoform, which was exclusively expressed in stomatal cells, had only a minor influence on guard cell osmoregulation and/or bioenergetics. Although none of the sucrose synthases appear to be essential for normal growth under our standard growth conditions, they may be necessary for growth under stress conditions. Different isoforms of sucrose synthase respond differently to various abiotic stresses. 
It has been shown that oxygen deprivation up-regulates Sus1 and Sus4 and increases total Susy activity. However, the analysis of plants with reduced expression of both Sus1 and Sus4 revealed no obvious effects on plant performance under oxygen deprivation. Low temperature up-regulates Sus1 expression, but the loss of this isoform has no effect on the freezing tolerance of non-acclimated and cold-acclimated plants. These data provide a comprehensive overview of the expression of this gene family, which supports some of the previously reported roles for Susy and indicates the involvement of specific isoforms in metabolism and/or signalling.
Investigations with frequency domain photon density waves allow elucidation of absorption and scattering properties of turbid media. The temporal and spatial propagation of intensity modulated light with frequencies up to more than 1 GHz can be described by the P1 approximation to the Boltzmann transport equation. In this study, we establish requirements for the appropriate choice of turbid model media and characterize mixtures of isosulfan blue as absorber and polystyrene beads as scatterer. For these model media, the independent determination of absorption and reduced scattering coefficients over large absorber and scatterer concentration ranges is demonstrated with a frequency domain photon density wave spectrometer employing intensity and phase measurements at various modulation frequencies.
Nowadays, colloidal rods can be synthesized in large amounts. The rods are typically cylindrical and their length ranges from several nanometers to a few micrometers. In solution, systems of colloidal rodlike molecules or aggregates can form liquid-crystalline phases with long-range orientational and spatial order. In the present work, we investigate structure formation and fractionation in systems of rodlike colloids with the help of Monte Carlo simulations in the NPT ensemble. Repulsive interactions can successfully be mimicked by the hard rod model, which has been studied extensively in the past. In many cases, however, attractive interactions like van der Waals or depletion forces cannot be neglected. In the first part of this work, the phase behavior of monodisperse attractive rods is characterized for different interaction strengths. Phase diagrams as a function of rod length and pressure are presented. Most systems of synthesized mesoscopic rods have a polydisperse length distribution as a consequence of the longitudinal growth process of the rods. For many technical and research applications, a rather small polydispersity is desired in order to have well defined material properties. The polydispersity can be reduced by a spatial demixing (fractionation) of long and short rods. Fractionation and structure formation are studied in a tridisperse and a polydisperse bulk suspension of rods. We observe that the resulting structures depend distinctly on the interaction strength. The fractionation in the system is strongly enhanced with increasing interaction strength. Suspensions are typically confined in a container. We also examine the influence of adjacent substrates in systems of tridisperse and polydisperse rod suspensions. Three different substrate types are studied in detail: a planar wall, a corrugated substrate, and a substrate with rectangular cavities. We analyze the fluid structure close to the substrate and substrate-controlled fractionation. 
The spatial arrangement of long and short rods in front of the substrate depends sensitively on the substrate structure and the pressure. Rods with a predefined length are segregated at substrates with rectangular cavities.
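The core of an NPT Monte Carlo scheme like the one used here can be illustrated with a deliberately minimal one-dimensional analogue: a Tonks gas of hard rods between hard walls. This toy sketch (not the thesis code; all parameters are illustrative) combines single-particle displacement moves with volume moves accepted with probability min(1, exp(-βPΔL + N ln(L'/L))).

```python
import numpy as np

rng = np.random.default_rng(1)

def npt_hard_rods(n=20, sigma=1.0, beta_p=1.0, sweeps=20000, dx=0.3, dv=1.0):
    """NPT Metropolis MC for a 1D hard-rod (Tonks) gas between hard walls.
    Returns the average box length <L> over the second half of the run."""
    L = 3.0 * n * sigma
    x = np.linspace(sigma / 2, L - sigma / 2, n)    # overlap-free start

    def overlaps(pos, box):
        xs = np.sort(pos)
        if xs[0] < sigma / 2 or xs[-1] > box - sigma / 2:
            return True                              # rod sticks into a wall
        return bool(np.any(np.diff(xs) < sigma))     # rod-rod overlap

    samples = []
    for sweep in range(sweeps):
        for _ in range(n):                           # particle displacement moves
            i = rng.integers(n)
            old = x[i]
            x[i] += dx * (2 * rng.random() - 1)
            if overlaps(x, L):
                x[i] = old                           # reject
        new_L = L + dv * (2 * rng.random() - 1)      # volume move
        if new_L > n * sigma:
            scale = new_L / L
            new_x = x * scale
            # standard NPT acceptance for hard particles (no energy term)
            arg = -beta_p * (new_L - L) + n * np.log(scale)
            if not overlaps(new_x, new_L) and rng.random() < np.exp(min(arg, 0.0)):
                x, L = new_x, new_L
        if sweep >= sweeps // 2:
            samples.append(L)
    return float(np.mean(samples))

mean_L = npt_hard_rods()
```

For this 1D gas the isobaric average is known exactly, ⟨L⟩ = Nσ + (N+1)/(βP), which makes the sketch easy to validate; the 3D simulations of the thesis add orientational moves and attractive pair energies on top of the same acceptance structure.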
The spatio-temporal evolution of three recent tsunamigenic earthquakes (TsE) is analysed: the Mw 9.3 event off-coast N-Sumatra, the 28/03/2005 (Mw 8.5) event off-coast Nias, and the 17/07/2006 (Mw 7.7) event off-coast Java. Start time, duration, and propagation of the rupture are retrieved. All parameters can be obtained rapidly after recording of the first-arrival phases in near-real-time processing. We exploit semblance analysis, backpropagation and broad-band seismograms within 30°-95° distance. Image enhancement is achieved by stacking the semblance of arrays from different directions. For the three events, the rupture extends over about 1150, 150, and 200 km, respectively. The events in 2004, 2005, and 2006 had source durations of at least 480 s, 120 s, and 180 s, respectively. We observe unilateral rupture propagation for all events except for the rupture onset and the Nias event, where there is evidence for a bilateral start of the rupture. Whereas the average rupture speed of the events in 2004 and 2005 is of the order of the S-wave speed (≈2.5-3 km/s), unusually slow rupturing (≈1.5 km/s) is indicated for the July 2006 event. For the July 2006 event we find rupturing of a 200 x 100 km wide area in at least two phases, with propagation from NW to SE. The event has some characteristics of a circular rupture followed by unilateral faulting with a change in slip rate. Fault area and aftershock distribution coincide. Spatial and temporal resolution are frequency dependent. Studies of a Mw 6.0 earthquake on 2006/09/21 and one synthetic source show a ≈1° limit in resolution. Retrieved source area and source duration, as well as peak values for semblance and beam power, generally increase with the size of the earthquake, making automatic detection and classification of large and small earthquakes possible.
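The semblance used for rupture imaging is, in the standard Neidell–Taner form (a sketch of the generic definition; the study's exact windowing and backpropagation delays are not reproduced here): for N array traces f_i aligned with trial delays τ_i over a window of 2M+1 samples around time t,

```latex
\mathrm{Sem}(t) \;=\;
\frac{\displaystyle\sum_{k=t-M}^{t+M}\Bigl(\sum_{i=1}^{N} f_i(k+\tau_i)\Bigr)^{2}}
     {\,N \displaystyle\sum_{k=t-M}^{t+M}\sum_{i=1}^{N} f_i(k+\tau_i)^{2}}
```

Semblance ranges from 1/N for incoherent noise to 1 for perfectly coherent arrivals, so scanning trial source locations and stacking the semblance of arrays in different directions highlights the migrating rupture front.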
Interdisciplinary studies on information structure : ISIS ; Working papers of the SFB 632 - Vol. 5
(2006)
In this paper we compare the behaviour of adverbs of frequency (de Swart 1993) like usually with the behaviour of adverbs of quantity like for the most part in sentences that contain plural definites. We show that sentences containing the former type of Q-adverb evidence that Quantificational Variability Effects (Berman 1991) come about as an indirect effect of quantification over situations: in order for quantificational variability readings to arise, these sentences have to obey two newly observed constraints that clearly set them apart from sentences containing corresponding quantificational DPs, and that can plausibly be explained under the assumption that quantification over (the atomic parts of) complex situations is involved. Concerning sentences with the latter type of Q-adverb, on the other hand, such evidence is lacking: with respect to the constraints just mentioned, they behave like sentences that contain corresponding quantificational DPs. We take this as evidence that Q-adverbs like for the most part do not quantify over the atomic parts of sum eventualities in the cases under discussion (as claimed by Nakanishi and Romero (2004)), but rather over the atomic parts of the respective sum individuals.
In this thesis the interplay between hydrodynamic transport and specific adhesion is theoretically investigated. An important biological motivation for this work is the rolling adhesion of white blood cells, experimentally investigated in flow chambers. There, specific adhesion is mediated by weak bonds between complementary molecular building blocks which are either located on the cell surface (receptors) or attached to the bottom plate of the flow chamber (ligands). The model system under consideration is a hard sphere covered with receptors moving above a planar ligand-bearing wall. The motion of the sphere is influenced by a simple shear flow, deterministic forces, and Brownian motion. An algorithm is given that allows one to numerically simulate this motion as well as the formation and rupture of bonds between receptors and ligands. The presented algorithm spatially resolves receptors and ligands. This opens up the perspective of applying the results also to flow chamber experiments done with patterned substrates based on modern nanotechnological developments. In the first part the influence of the flow rate, as well as of the number and geometry of receptors and ligands, on the probability for initial binding is studied. This is done by determining the mean time that elapses until the first encounter between a receptor and a ligand occurs. It turns out that besides the number of receptors, especially the height by which the receptors are elevated above the surface of the sphere plays an important role. These findings are in good agreement with observations of actual biological systems like white blood cells or malaria-infected red blood cells. Then, the influence of bonds which have formed between receptors and ligands, but easily rupture in response to force, on the motion of the sphere is studied. It is demonstrated that different states of motion, for example rolling, can be distinguished.
The appearance of these states as a function of important model parameters is then systematically investigated. Furthermore, it is shown which bond property enhances the ability of cells to roll stably over a large range of applied flow rates. Finally, the model is applied to another biological process, the transport of spherical cargo particles by molecular motors. In analogy to the systems described so far, molecular motors can be considered as bonds that are able to actively move. In this part of the thesis the mean distance over which the cargo particles are transported is determined.
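The stochastic bond dynamics underlying such rolling states can be illustrated with a minimal two-state Monte Carlo sketch. It assumes a single receptor-ligand bond with a constant formation rate and a Bell-type, force-dependent rupture rate; the function name and all parameter values are illustrative assumptions, and the thesis' full algorithm (sphere hydrodynamics, many spatially resolved bonds) is far richer.

```python
import math
import random

def simulate_bond(k_on, k0_off, force, f_detach, dt=1e-3, steps=200_000, seed=1):
    """Two-state (bound/unbound) Monte Carlo for a single receptor-ligand bond
    under a constant pulling force. Bell model for slip bonds:
        k_off = k0_off * exp(force / f_detach)
    Returns the fraction of time steps during which the bond is closed.
    Rates and forces are in arbitrary units; this is a didactic sketch only."""
    rng = random.Random(seed)
    k_off = k0_off * math.exp(force / f_detach)
    bound, time_bound = False, 0
    for _ in range(steps):
        if bound:
            if rng.random() < k_off * dt:
                bound = False          # bond ruptures
            else:
                time_bound += 1
        else:
            if rng.random() < k_on * dt:
                bound = True           # bond forms
    return time_bound / steps
```

Increasing the force raises the rupture rate exponentially, so the fraction of time spent bound drops, which is the mechanism behind the transition between firmly adhering, rolling, and freely moving states described above.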
The present thesis deals with the mental representation of numbers in space. It is generally assumed that numbers are mentally represented on a mental number line along which they are ordered in a continuous and analogical manner. Dehaene, Bossini and Giraux (1993) found that the mental number line is spatially oriented from left to right. Using a parity-judgment task they observed faster left-hand responses for smaller numbers and faster right-hand responses for larger numbers. This effect has been labelled the Spatial Numerical Association of Response Codes (SNARC) effect. The first study of the present thesis deals with the question whether the spatial orientation of the mental number line derives from the writing system participants are accustomed to. According to a strong ontogenetic interpretation, the SNARC effect should obtain only for effectors closely related to the comprehension and production of written language (hands and eyes). We asked participants to indicate the parity status of digits by pressing a pedal with their left or right foot. In contrast to the strong ontogenetic view, we observed a pedal SNARC effect which did not differ from the manual SNARC effect. In the second study we evaluated whether the SNARC effect reflects an association of numbers and extracorporal space or an association of numbers and hands. To do so, we varied the spatial arrangement of the response buttons (vertical vs. horizontal) and the instruction (hand-related vs. button-related). For vertically arranged buttons and a button-related instruction we found a button-related SNARC effect. In contrast, for a hand-related instruction we obtained a hand-related SNARC effect. For horizontally arranged buttons and a hand-related instruction, however, we found a button-related SNARC effect. The results of the first two studies were interpreted in terms of a weak ontogenetic view. In the third study we aimed to examine the functional locus of the SNARC effect.
We used the psychological refractory period paradigm. In the first experiment participants first indicated the pitch of a tone and then the parity status of a digit (locus-of-slack paradigm). In a second experiment the order of stimulus presentation, and thus of the tasks, was reversed (effect-propagation paradigm). The results led us to conclude that the SNARC effect arises while the response is centrally selected. In our fourth study we tested for an association of numbers and time. We asked participants to compare two serially presented digits. Participants were faster to compare ascending digit pairs (e.g., 2-3) than descending pairs (e.g., 3-2). The pattern of our results was interpreted in terms of forward associations ("1-2-3") as formed by our ubiquitous cognitive routines for counting objects or events.
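The conventional quantitative signature of the SNARC effect is the slope of the right-minus-left response-time difference regressed on number magnitude. A minimal sketch of that regression is shown below; the function name and the per-digit mean-RT data layout are illustrative assumptions, not taken from the thesis.

```python
def snarc_slope(rt_left, rt_right):
    """Ordinary least-squares slope of dRT = RT(right hand) - RT(left hand)
    regressed on digit magnitude. A negative slope is the classic SNARC
    signature: small numbers favour left responses, large numbers right ones.
    rt_left / rt_right: dicts mapping each digit to a mean RT (ms)."""
    digits = sorted(rt_left)
    drt = [rt_right[d] - rt_left[d] for d in digits]
    n = len(digits)
    mean_d = sum(digits) / n
    mean_y = sum(drt) / n
    num = sum((d - mean_d) * (y - mean_y) for d, y in zip(digits, drt))
    den = sum((d - mean_d) ** 2 for d in digits)
    return num / den
```

In a typical parity-judgment data set this slope is computed per participant and then tested against zero across participants.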
I perform and analyse the first ever calculations of rotating stellar iron core collapse in {3+1} general relativity that start out with presupernova models from stellar evolutionary calculations and include a microphysical finite-temperature nuclear equation of state, an approximate scheme for electron capture during collapse, and neutrino pressure effects. Based on the results of these calculations, I obtain the most realistic estimates to date for the gravitational wave signal from collapse, bounce, and the early postbounce phase of core-collapse supernovae. I supplement my {3+1} GR hydrodynamic simulations with 2D Newtonian neutrino radiation-hydrodynamic supernova calculations focussing on (1) the late postbounce gravitational wave emission owing to convective overturn, anisotropic neutrino emission and protoneutron star pulsations, and (2) the gravitational wave signature of accretion-induced collapse of white dwarfs to neutron stars.
The properties of a series of well-defined new surfactant oligomers (dimers to tetramers) were examined. From a molecular point of view, these oligomeric surfactants consist of simple monomeric cationic surfactant fragments coupled via the hydrophilic ammonium chloride head groups by spacer groups (different in nature and length). Properties of these cationic surfactant oligomers in aqueous solution, such as solubility, micellization and surface activity, micellar size and aggregation number, were discussed with respect to the two new molecular variables introduced, i.e. the degree of oligomerization and the spacer group, in order to establish structure-property relationships. Thus, increasing the degree of oligomerization results in a pronounced decrease of the critical micellization concentration (CMC). Both reduced spacer length and increased spacer hydrophobicity lead to a decrease of the CMC, but to a lesser extent. For these particular compounds, the formed micelles are relatively small and their aggregation number decreases with increasing degree of oligomerization, spacer length, and steric hindrance. In addition, pseudo-phase diagrams were established for the dimeric surfactants in more complex systems, namely inverse microemulsions, demonstrating again the important influence of the spacer group on the surfactant behaviour. Furthermore, the influence of additives on the property profile of the dimeric compounds was examined, in order to see if the solution properties can be improved while using less material. Strong synergistic effects were observed by adding special organic salts (e.g. sodium salicylate, sodium vinyl benzoate, etc.) to the surfactant dimers in stoichiometric amounts. For such mixtures, the critical aggregation concentration is strongly shifted to lower concentration, the effect being more pronounced for dimers than for analogous monomers. A sharp decrease of the surface tension can also be attained.
Many of the organic anions produce viscoelastic solutions when added to the relatively short-chain dimers in aqueous solution, as evidenced by rheological measurements. This behaviour reflects the formation of entangled wormlike micelles due to strong interactions of the anions with the cationic surfactants, decreasing the curvature of the micellar aggregates. It is found that the associative behaviour is enhanced by dimerization. For a given counterion, the spacer group may also induce a stronger viscosifying effect depending on its length and hydrophobicity. Oppositely charged surfactants were also combined with the cationic dimers. First, some mixtures with the conventional anionic surfactant SDS revealed vesicular aggregates in solution. In view of these catanionic mixtures, a novel anionic dimeric surfactant based on EDTA was also synthesized and studied. The synthesis route is relatively simple and the compound exhibits particularly appealing properties such as low CMC and σCMC values, good solubilization capacity for hydrophobic probes, and high tolerance to hard water. Notably, mixtures with particular cationic dimers gave rise to viscous solutions, reflecting micelle growth.
Tensile source components of swarm events in West Bohemia in 2000 by considering seismic anisotropy
(2006)
Earthquake swarms occur frequently in West Bohemia, Central Europe. Their occurrence is correlated with, and probably triggered by, fluids that escape at the earth's surface near the epicentres. These fluids rise periodically from a seemingly deep-seated source in the upper mantle. Moment tensors for swarm events in 1997 indicate tensile faulting. However, they were determined under the assumption of seismic isotropy, although anisotropy can be observed. Anisotropy may obscure moment tensors and their interpretation. In 2000, more than 10,000 swarm earthquakes occurred near Novy Kostel, West Bohemia. Event triggering by fluid injection is likely. Activity lasted from 28/08 until 31/12/00 (9 phases), with a maximum of ML=3.2. High-quality P-wave seismograms were used to retrieve the source mechanisms for 112 events between 28/08/00 and 30/10/00 using > 20 stations. We determine the source geometry using a new algorithm and different velocity models, including anisotropy. From inversions of P waves we observe strike-slip events (ML<3.2) on steep N-S oriented faults with additional normal or reverse components. Tensile components seem to be evident for more than 60% of the processed swarm events in West Bohemia during phases 1-7. They are time and location dependent, being most significant at great depths and during phases 1-4 of the swarm. Although tensile components are reduced when anisotropy is assumed, they persist and seem to be important. They can be explained by pore-pressure changes due to the injection of rising fluids. Our findings agree with other observations, e.g., the correlation of fluid transport and seismicity, variations in b-value and forcing rate, and pore pressure diffusion. Tests of our results confirm their significance.
The Mw=7.7 tsunamigenic earthquake (TsE) on 17 July 2006, 08:19:28, shook the Indian Ocean at about 15 km depth off-coast Java, Indonesia. It caused a local tsunami with wave heights exceeding 2 m. The death toll reached several hundred, and thousands of people were displaced. By means of standard array methods, we have investigated the propagation and the extent of the rupture front of the causative earthquake. Waveform similarity is expressed by means of the semblance. We back-propagate the semblance for first-arrival phases recorded at broad-band stations within teleseismic distances (30°-95°). Image enhancement is realised by stacking the semblance of 8 arrays within different epicentral and azimuthal directions. From teleseismic observations we find rupturing of a 200 x 100 km wide area in at least 2 phases, with propagation from NW to SE and a source duration >125 s. The event has some characteristics of a circular rupture followed by unilateral faulting with a change in slip rate. Unusually slow rupturing (≈1.5 km/s) is indicated. Fault area and aftershock distribution coincide. Spatial and temporal resolution are frequency dependent. Studies of a Mw6.0 earthquake on 2006/09/21 and one synthetic source show a ≈1° limit in resolution. Retrieved source area, source duration, and peak values for semblance and beam power increase with the size of the earthquake, making automatic detection and classification of large and small earthquakes possible.
The innovation of information techniques has changed many aspects of our life. In the health care field, we can obtain, manage and communicate high-quality, large volumetric image data with computer-integrated devices to support medical care. In this dissertation I propose several promising methods that could assist physicians in processing, observing and communicating the image data. They fall under my three research aspects: telemedicine integration, medical image visualization and image segmentation. These methods are also demonstrated by the demo software that I developed. One of my research points focuses on medical information storage standards in telemedicine, for example DICOM, which is the predominant standard for the storage and communication of medical images. I propose a novel 3D image data storage method, which is lacking in the current DICOM standard. I also created a mechanism to make use of non-standard or private DICOM files. In this thesis I present several rendering techniques for medical image visualization to offer different display manners, both 2D and 3D, for example, cutting through the data volume at an arbitrary angle, rendering the surface shell of the data, and rendering the semi-transparent volume of the data. A hybrid segmentation approach, designed for semi-automated segmentation of radiological images such as CT and MRI, is proposed in this thesis to extract the organ or region of interest from the image. This approach takes advantage of region-based and boundary-based methods. Three steps compose the hybrid approach: the first step obtains a coarse segmentation by fuzzy affinity and generates a homogeneity operator; the second step divides the image by a Voronoi diagram and reclassifies the regions with the operator to refine the segmentation from the previous step; the third step handles vague boundaries with a level set model.
Topics for future research are mentioned at the end, including a new supplement to the DICOM standard for the storage of segmentation information, visualization of multimodal image information, and improvement of the segmentation approach to higher dimensions.
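The region-based first step of such a hybrid pipeline can be illustrated with a toy seeded region-growing routine. This is a hedged sketch only: the dissertation's actual first step uses fuzzy affinity rather than the plain intensity threshold assumed here, and the Voronoi and level-set steps are not shown.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Seeded region growing: collect all 4-connected pixels whose intensity
    differs from the seed pixel by at most `tol`.

    image: list of rows of scalar intensities; seed: (row, col) tuple.
    Returns the set of (row, col) coordinates in the grown region."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tol):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

A fuzzy-affinity variant would replace the hard `abs(...) <= tol` test with a continuous affinity score and threshold it, which is what makes the subsequent Voronoi-based reclassification step useful.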
Three quantum cryptographic protocols for multiuser quantum networks with embedded authentication, allowing quantum key distribution or quantum direct communication, are discussed in this work. The security of the protocols against different types of attacks is analysed, with a focus on various impersonation attacks and the man-in-the-middle attack. On the basis of the security analyses, several improvements are suggested and implemented in order to address the identified vulnerabilities. Furthermore, the impact of the eavesdropping test procedure on impersonation attacks is outlined. The framework of a general eavesdropping test is proposed to provide additional protection against security risks in impersonation attacks.
Förster Resonance Energy Transfer (FRET) plays an important role for biochemical applications such as DNA sequencing, intracellular protein-protein interactions, molecular binding studies, in vitro diagnostics and many others. For qualitative and quantitative analysis, FRET systems are usually assembled through molecular recognition of biomolecules conjugated with donor and acceptor luminophores. Lanthanide (Ln) complexes, as well as semiconductor quantum dot nanocrystals (QD), possess unique photophysical properties that make them especially suitable for applied FRET. In this work the possibility of using QD as very efficient FRET acceptors in combination with Ln complexes as donors in biochemical systems is demonstrated. The necessary theoretical and practical background of FRET, Ln complexes, QD and the applied biochemical models is outlined. In addition, scientific as well as commercial applications are presented. FRET can be used to measure structural changes or dynamics at distances ranging from approximately 1 to 10 nm. The very strong and well characterized binding process between streptavidin (Strep) and biotin (Biot) is used as a biomolecular model system. A FRET system is established by Strep conjugation with the Ln complexes and QD biotinylation. Three Ln complexes (one with Tb3+ and two with Eu3+ as central ion) are used as FRET donors. Besides the QD two further acceptors, the luminescent crosslinked protein allophycocyanin (APC) and a commercial fluorescence dye (DY633), are investigated for direct comparison. FRET is demonstrated for all donor-acceptor pairs by acceptor emission sensitization and a more than 1000-fold increase of the luminescence decay time in the case of QD reaching the hundred microsecond regime. Detailed photophysical characterization of donors and acceptors permits analysis of the bioconjugates and calculation of the FRET parameters. 
Extremely large Förster radii of more than 100 Å are achieved for QD as acceptors, considerably larger than for APC and DY633 (ca. 80 and 60 Å). Special attention is paid to interactions with different additives in aqueous solutions, namely borate buffer, bovine serum albumin (BSA), sodium azide and potassium fluoride (KF). A more than 10-fold decrease of the limit of detection (LOD) compared to the extensively characterized and frequently used donor-acceptor pair of Europium tris(bipyridine) (Eu-TBP) and APC is demonstrated for the FRET system consisting of the Tb complex and QD. A sub-picomolar LOD for QD is achieved with this system in azide-free borate buffer (pH 8.3) containing 2 % BSA and 0.5 M KF. In order to transfer the Strep-Biot model system to a real-life in vitro diagnostic application, two kinds of immunoassays are investigated using human chorionic gonadotropin (HCG) as analyte. HCG itself, as well as two monoclonal anti-HCG mouse-IgG (immunoglobulin G) antibodies, are labeled with the Tb complex and QD, respectively. Although no sufficient evidence for FRET can be found for a sandwich assay, FRET becomes obvious in a direct HCG-IgG assay, showing the feasibility of using the Ln-QD donor-acceptor pair as a highly sensitive analytical tool for in vitro diagnostics.
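The distance dependence behind these Förster radii follows the standard FRET relation E = 1/(1 + (r/R0)^6). A small sketch showing the efficiency and its inversion to a donor-acceptor distance (function names are illustrative, not from this work):

```python
def fret_efficiency(r, r0):
    """FRET efficiency for donor-acceptor distance r and Förster radius r0
    (same length units, e.g. Angstrom): E = 1 / (1 + (r/r0)**6)."""
    return 1.0 / (1.0 + (r / r0) ** 6)

def fret_distance(efficiency, r0):
    """Invert the FRET efficiency to recover the donor-acceptor distance."""
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

# With a Förster radius of 100 Å, the order reported here for the Tb-to-QD
# pair, a donor-acceptor separation of 100 Å still transfers half the energy.
e_at_r0 = fret_efficiency(100.0, 100.0)
```

The steep sixth-power dependence is what makes FRET a "molecular ruler" over roughly 1-10 nm, and a larger R0 directly extends the usable distance range.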
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, a detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data like tree rings or sediment and ice cores. In order to be able to appropriately interpret such sources of palaeoclimatic information, suitable approaches of statistical modelling as well as methods of time series analysis are necessary, which are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions as the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace element abundances and grain-size distributions obtained near Cape Roberts (Eastern Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transportation and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship with the mutual component overlap, as well as with the information missing due to grouping and truncation of the measured data, is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals that certain problems accompany the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
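The finite mixture modelling of grain-size distributions mentioned above can be illustrated with a minimal expectation-maximization (EM) fit of a two-component 1D Gaussian mixture. This is a didactic sketch under simplifying assumptions (exactly two components, crude initialisation from the data range, and none of the grouping/truncation corrections or asymptotic uncertainty distributions developed in the thesis):

```python
import math
import random

def gauss_pdf(x, mu, sd):
    """Density of a 1D normal distribution."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def fit_mixture(data, iters=200):
    """EM fit of a two-component 1D Gaussian mixture, e.g. to decompose a
    grain-size distribution into two fractions.
    Returns (weights, means, standard deviations)."""
    lo, hi = min(data), max(data)
    mu = [lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)]
    sd = [(hi - lo) / 4.0] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        resp = []
        for x in data:
            p = [w[k] * gauss_pdf(x, mu[k], sd[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and standard deviations
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
    return w, mu, sd
```

With overlapping components, EM estimates become strongly correlated and uncertain, which is exactly the failure mode that motivates the alternative principal-component decomposition described above.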
The terrestrial biosphere impacts considerably on the global carbon cycle. In particular, ecosystems contribute to offsetting anthropogenically induced fossil fuel emissions and hence decelerate the rise of the atmospheric CO₂ concentration. However, the future net sink strength of an ecosystem will heavily depend on the response of the individual processes to a changing climate. Understanding the makeup of these processes and their interaction with the environment is, therefore, of major importance for developing long-term climate mitigation strategies. Mathematical models are used to predict the fate of carbon in the soil-plant-atmosphere system under changing environmental conditions. However, the underlying processes giving rise to the net carbon balance of an ecosystem are complex and not entirely understood at the canopy level. Therefore, carbon exchange models are characterised by considerable uncertainty, rendering model-based predictions of the future prone to error. Observations of the carbon exchange at the canopy scale can help in learning about the dominant processes and hence contribute to reducing the uncertainty associated with model-based predictions. For this reason, a global network of measurement sites has been established that provides long-term observations of the CO₂ exchange between a canopy and the atmosphere along with micrometeorological conditions. These time series, however, suffer from observation uncertainty that, if not characterised, limits their use in ecosystem studies. The general objective of this work is to develop a modelling methodology that synthesises physical process understanding with the information content of canopy-scale data, as an attempt to overcome the limitations in both carbon exchange models and observations. Similar hybrid modelling approaches have been successfully applied for signal extraction from noisy time series in environmental engineering.
Here, simple process descriptions are used to identify relationships between the carbon exchange and environmental drivers from noisy data. The functional forms of these relationships are not prescribed a priori but rather determined directly from the data, ensuring that the model complexity is commensurate with the observations. Therefore, this data-led analysis results in the identification of the processes dominating carbon exchange at the ecosystem scale as reflected in the data. The description of these processes may then lead to robust carbon exchange models that contribute to a faithful prediction of the ecosystem carbon balance. This work presents a number of studies that make use of the developed data-led modelling approach for the analysis and interpretation of net canopy CO₂ flux observations. Given the limited knowledge about the underlying real system, the evaluation of the derived models with synthetic canopy exchange data is introduced as a standard procedure prior to any employment of real data. The derived data-led models prove successful in several different applications. First, the data-based nature of the presented methods makes them particularly useful for replacing missing data in the observed time series. The resulting interpolated CO₂ flux observation series can then be analysed with dynamic modelling techniques, or integrated to coarser temporal resolution series for further use, e.g., in model evaluation exercises. However, the noise component in these observations interferes with deterministic flux integration, in particular when long time periods are considered. Therefore, a method to characterise the uncertainties in the flux observations that uses a semi-parametric stochastic model is introduced in a second study. As a result, an (uncertain) estimate of the annual net carbon exchange of the observed ecosystem can be inferred directly from a statistically consistent integration of the noisy data.
For the forest measurement sites analysed, the relative uncertainty of the annual sum did not exceed 11 percent, highlighting the value of the data. Based on the same models, a disaggregation of the net CO₂ flux into carbon assimilation and respiration is presented in a third study that allows for the estimation of annual ecosystem carbon uptake and release. These two components can then be further analysed for their separate responses to environmental conditions. Finally, a fourth study demonstrates how the results from data-led analyses can be turned into a simple parametric model that is able to predict the carbon exchange of forest ecosystems. Given the global network of measurements available, the derived model can now be tested for generality and transferability to other biomes. In summary, this work particularly highlights the potential of the presented data-led methodologies to identify and describe dominant carbon exchange processes at the canopy level, contributing to a better understanding of ecosystem functioning.
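The integration of noisy fluxes to an annual sum can be illustrated, in a much simplified form, by summing half-hourly CO₂ fluxes to an annual carbon total and propagating purely random, independent observation errors. All names and unit conventions here are illustrative; the thesis' semi-parametric stochastic model additionally handles gaps and error correlation, which would inflate the error estimate below.

```python
import math

def annual_sum_with_uncertainty(flux, sigma, dt=1800.0):
    """Integrate a gap-free series of half-hourly net CO2 fluxes
    (umol CO2 m^-2 s^-1) to a carbon total (gC m^-2) and propagate random
    observation error. `sigma` holds one standard deviation per flux value;
    assuming independent errors, the variance of the sum equals the sum of
    the variances. 1 umol CO2 carries 12.011e-6 g of carbon."""
    umol_to_gc = 12.011e-6
    total = sum(flux) * dt * umol_to_gc
    err = math.sqrt(sum(s * s for s in sigma)) * dt * umol_to_gc
    return total, err
```

Because the error of the sum grows only with the square root of the number of values while the sum itself grows linearly, the relative uncertainty of an annual total can stay small (here, reported below 11 percent) even when individual half-hourly fluxes are quite noisy.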
Polyelectrolyte microcapsules containing stimuli-responsive polymers have potential applications in the fields of sensors or actuators, stimulable microcontainers and controlled drug delivery. Such capsules were prepared, with the focus on pH-sensitivity and carbohydrate-sensing. First, pH-responsive polyelectrolyte capsules were produced by means of electrostatic layer-by-layer assembly of oppositely charged weak polyelectrolytes onto colloidal templates that were subsequently removed. The capsules were composed of poly(allylamine hydrochloride) (PAH) and poly(methacrylic acid) (PMA) or poly(4-vinylpyridine) (P4VP) and PMA and varied considerably in their hydrophobicity and the influence of secondary interactions. These polymers were assembled onto CaCO3 and SiO2 particles with diameters of ~ 5 µm, and a new method for the removal of the silica template under mild conditions was proposed. The pH-dependent stability of PAH/PMA and P4VP/PMA capsules was studied by confocal laser scanning microscopy (CLSM). They were stable over a wide pH-range and exhibited a pronounced swelling at the edges of stability, which was attributed to uncompensated positive or negative charges within the multilayers. The swollen state could be stabilized when the electrostatic repulsion was counteracted by hydrogen-bonding, hydrophobic interactions or polymeric entanglement. This stabilization made it possible to reversibly swell and shrink the capsules by tuning the pH of the solution. The pH-dependent ionization degree of PMA was used to modulate the binding of calcium ions. In addition to the pH-sensitivity, the stability and the swelling degree of these capsules at a given pH could be modified, when the ionic strength of the medium was altered. The reversible swelling was accompanied by reversible permeability changes for low and high molecular weight substances. 
The permeability for glucose was evaluated by studying the time-dependence of the buckling of the capsule walls in glucose solutions, and the reversible permeability modulation was used for the encapsulation of polymeric material. A theoretical model, taking into account an osmotic expanding force and an elastic restoring force, was proposed to explain the pH-dependent size variations of weak polyelectrolyte capsules. Second, sugar-sensitive multilayers were assembled using the reversible covalent ester formation between the polysaccharide mannan and phenylboronic acid moieties that were grafted onto poly(acrylic acid) (PAA). The resulting multilayer films were sensitive to several carbohydrates, showing the highest sensitivity to fructose. The response to carbohydrates resulted from the competitive binding of small molecular weight sugars and mannan to the boronic acid groups within the film, and was observed as a fast dissolution of the multilayers when they were brought into contact with the sugar-containing solution above a critical concentration. It was also possible to prepare carbohydrate-sensitive multilayer capsules, and their sugar-dependent stability was investigated by following the release of encapsulated rhodamine-labeled bovine serum albumin (TRITC-BSA).
Germination rates and germination fractions of seeds can be predicted well by the hydrothermal time (HTT) model. Its four parameters (hydrothermal time, minimum soil temperature, minimum soil moisture, and the variation of minimum soil moisture), however, must be determined by lengthy germination experiments at combinations of several levels of soil temperature and moisture. For some applications of the HTT model it is more important to have approximate estimates for many species than exact values for only a few species. We suggest that the minimum temperature and the variation of minimum moisture can be estimated from literature data and expert knowledge. This allows one to derive hydrothermal time and minimum moisture from existing data from germination experiments with one level of temperature and moisture. We applied our approach to a germination experiment comparing germination fractions of wild annual species along an aridity gradient in Israel. Using this simplified approach we estimated hydrothermal time and minimum moisture for 36 species. Comparison with exact data for three species shows that our method is simple but effective for obtaining parameters for the HTT model. Hydrothermal time and minimum moisture supposedly indicate climate-related germination strategies. We tested whether these two parameters varied with the climate at the site where the seeds had been collected. We found no consistent variation with climate across species, suggesting that variation is more strongly controlled by site-specific factors.
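The HTT prediction itself can be sketched in a few lines, following the standard formulation in which seeds vary normally in their base water potential and the four parameters named above enter directly. Function and argument names are illustrative assumptions, not taken from the paper.

```python
from statistics import NormalDist

def germination_fraction(t, temp, psi, theta_ht, t_base, psi_b50, sigma_psi_b):
    """Cumulative germination fraction at time t (e.g. days) under the
    hydrothermal time (HTT) model.

    Seeds vary in base water potential: Psi_b ~ Normal(psi_b50, sigma_psi_b).
    A seed has germinated by time t if
        (temp - t_base) * (psi - Psi_b) * t >= theta_ht,
    so the germinated fraction is the normal CDF evaluated at the critical
    base water potential. Below the base temperature nothing germinates."""
    if temp <= t_base or t <= 0:
        return 0.0
    threshold = psi - theta_ht / ((temp - t_base) * t)
    return NormalDist(psi_b50, sigma_psi_b).cdf(threshold)
```

Fitting `theta_ht` and `psi_b50` to germination counts from a single temperature/moisture level, with `t_base` and `sigma_psi_b` fixed from literature and expert knowledge, is the simplification the abstract proposes.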
First studies of electron transfer in [N]phenylenes were performed in bimolecular quenching reactions of angular [3]- and triangular [4]phenylene with various electron acceptors. The relation between the quenching rate constants kq and the free energy change of the electron transfer (ΔG0CS) could be described by the Rehm-Weller equation. From the experimental results, a reorganization energy λ of 0.7 eV was derived. Intramolecular electron transfer reactions were studied in an [N]phenylene bichromophore and a corresponding reference compound. The fluorescence lifetime and quantum yield of the bichromophore display a characteristic dependence on the solvent polarity, whereas the corresponding values of the reference compound remain constant. From these results, a nearly isoenergetic ΔG0CS can be determined. As the triplet quantum yield is nearly independent of the polarity, charge recombination leads to the population of the triplet state.
Contents:
Chapter 1. Introduction: 1 Information Structure; 2 Grammatical Correlates of Information Structure; 3 Structure of the Questionnaire; 4 Experimental Tasks; 5 Technicalities; 6 Archiving; 7 Acknowledgments
Chapter 2. General Questions: 1 General Information; 2 Phonology; 3 Morphology and Syntax
Chapter 3. Experimental Tasks: 1 Changes (Given/New in Intransitives and Transitives); 2 Giving (Given/New in Ditransitives); 3 Visibility (Given/New, Animacy and Type/Token Reference); 4 Locations (Given/New in Locative Expressions); 5 Sequences (Given/New/Contrast in Transitives); 6 Dynamic Localization (Given/New in Dynamic Loc. Descriptions); 7 Birthday Party (Weight and Discourse Status); 8 Static Localization (Macro-Planning and Given/New in Locatives); 9 Guiding (Presentational Utterances); 10 Event Cards (All New); 11 Anima (Focus Types and Animacy); 12 Contrast (Contrast in Pairing Events); 13 Animal Game (Broad/Narrow Focus in NP); 14 Properties (Focus on Property and Possessor); 15 Eventives (Thetic and Categorical Utterances); 16 Tell a Story (Contrast in Text); 17 Focus Cards (Selective, Restrictive, Additive, Rejective Focus); 18 Who does What (Answers to Multiple Constituent Questions); 19 Fairy Tale (Topic and Focus in Coherent Discourse); 20 Map Task (Contrastive and Selective Focus in Spontaneous Dialogue); 21 Drama (Contrastive Focus in Argumentation); 22 Events in Places (Spatial, Temporal and Complex Topics); 23 Path Descriptions (Topic Change in Narrative); 24 Groups (Partial Topic); 25 Connections (Bridging Topic); 26 Indirect (Implicational Topic); 27 Surprises (Subject-Topic Interrelation); 28 Doing (Action Given, Action Topic); 29 Influences (Question Priming)
Chapter 4. Translation Tasks: 1 Basic Intonational Properties; 2 Focus Translation; 3 Topic Translation; 4 Quantifiers
Chapter 5. Information Structure Summary Survey: 1 Preliminaries; 2 Syntax; 3 Morphology; 4 Prosody; 5 Summary: Information Structure
Chapter 6. Performance of Experimental Tasks in the Field: 1 Field Sessions; 2 Field Session Metadata; 3 Informants’ Agreement
When top sports performers fail or “choke” under pressure, everyone asks: why? Research has identified a number of conditions (e.g., an audience) that elicit choking, and a number of factors (e.g., trait anxiety) that moderate the pressure-performance relation. Furthermore, mediating processes have been investigated. For example, explicit monitoring theories link performance failure under psychological stress to an increase in attention paid to a skill and its step-by-step execution (Beilock & Carr, 2001). Many studies have provided support for these ideas. However, so far only overt performance measures have been investigated, which do not allow a more thorough analysis of processes or performance strategies. Moreover, a theoretical framework has been missing that could (a) explain the effects of explicit monitoring on skill execution and (b) make predictions as to what is being monitored during execution. Consequently, in this study the nodal-point hypothesis of motor control (Hossner & Ehrlenspiel, 2006) was used to predict movement changes on three levels of analysis at certain “nodal points” within the movement sequence. Performance in two different laboratory tasks was assessed with respect to overt performance (the observable result, for example accuracy in the target), covert performance (the description of movement execution, for example the acceleration of body segments) and task exploitation (the utilization of task properties such as covariation). A fake competition (see Beilock & Carr, 2002) was used to induce pressure. In study 1 a ball-bouncing task in a virtual-reality set-up was chosen. Previous studies (de Rugy, Wei, Müller, & Sternad, 2003) have shown that learners are usually able to “passively” exploit the dynamical stability of the system. According to explicit monitoring theories, choking should be expected either if the task itself evokes “active control” (Experiment 1) or if learners are provided with explicit instructions (Experiment 2).
In both experiments, participants first went through a practice phase on day 1. On day 2, following the Baseline Test, participants were divided into a High-Stress or a No-Stress Group for the final Performance Test. The High-Stress Group entered a fake competition. Overt performance was measured by the Absolute Error (AE) of ball amplitudes from target height; covert performance was measured by the Period Modulation between successive hits, and task exploitation was measured by the Acceleration (AC) at ball-racket impact and the Covariation (COV) of impact parameters. To evoke active control in Exp. 1 (N=20), perturbations to the ball flight were introduced. In Exp. 2 (N=39) half of the participants received explicit skill-focused instructions during learning. For overt performance, results generally show an interaction between Stress Group and Test, with better performance (i.e., lower AE) for the High-Stress Group in the final Performance Test. This effect is independent of the instructions that participants had received during learning (Exp. 2). Similar effects were found for COV but not for AC. In study 2 a visuomotor tracking task was used in which participants had to pursue a target cross moving on an invisible curve. This curve consisted of 3 segments of 6 turning points sequentially ordered around the x-axis. Participants learned two short movement sequences which were then concatenated to form a single sequence. It was expected that under pressure this sequence would “fall apart” at the point of concatenation. Overt performance was assessed by the Root Mean Square Error between target and pursuit cross as well as the Absolute Error at the turning points; covert performance was measured by the Latency from target to pursuit turning, and task exploitation was measured by the temporal covariation between successive intervals between turning points.
Experiment 3 (intraindividual variation) as well as Experiment 4 (interindividual variation) show performance enhancement in the pressure situation on the overt level, with matching results on the covert and task-exploitation levels. Thus, contrary to previous studies, no choking under pressure was found in any of the experiments. This may be interpreted as a failure of the experimental manipulation, but it certainly also highlights important characteristics of the task: choking should occur in tasks in which performers do not have the time to use action- or thought-control strategies, that are more relevant to their “self”, and that are discrete in nature.
Contents:
Introduction
Experimental Techniques: The LIF demonstrator unit; the mobile LIF spectrometer OPTIMOS; investigated petroleum products and soil samples
Results and Discussion: Photophysical properties of the petroleum products; LIF spectroscopic investigations of oil-spiked samples; LIF spectroscopic investigations of real-world soils
Conclusions
The fluorescence properties and the fluorescence quenching by Tb3+ of substituted benzoic acids were investigated in solution at different pH. The substituted benzoic acids were used as simple model compounds for chromophores present in humic substances (HS). It is shown that the fluorescence properties of the model compounds resemble the fluorescence of HS quite well. A major factor determining the fluorescence of the model compounds is proton transfer reactions in the electronically excited state. It is intriguing that the fluorescence of the model compounds was hardly quenched by Tb3+, while the HS fluorescence was decreased very effectively. From our results we conclude that proton transfer reactions as well as conformational reorientation processes play an important role in the fluorescence of HS. The luminescence of bound Tb3+ was sensitized by an energy transfer step upon excitation of the model compounds and of HS, respectively. For HS the observed sensitization depended on its origin, indicating differences 1) in the connection between chromophores and binding sites and 2) in the energy levels of the chromophore triplet states. Hence, the observed sensitization of the Tb3+ luminescence could be useful for characterizing structural differences of HS in solution. Interlanthanide energy transfer between Tb3+ and Nd3+ was used to determine the average distance R between both ions using the well-known formalism of luminescence resonance energy transfer. R depended on the origin of the HS, reflecting differences in structure. The value of Rmin seemed to be a unique feature of the HS. It was further found that R also changed upon variation of the pH. This demonstrates that the measurement of interlanthanide energy transfer can be used as a direct method to monitor conformational changes in HS.
Optical methods play an important role in process analytical technologies (PAT). Four examples of optical process and quality sensing (OPQS) are presented, which are based on three important experimental techniques: near-infrared absorption, luminescence quenching, and a novel method, photon density wave (PDW) spectroscopy. These are used to evaluate four process and quality parameters related to beer brewing and polyurethane (PU) foaming processes: the ethanol content and the oxygen (O2) content in beer, the biomass in a bioreactor, and the cellular structures of PU foam produced in a pilot production plant.
The Andean orogen is the most outstanding example of mountain building caused by the subduction of oceanic below continental lithosphere. The Andes formed through the subduction of the Nazca and Antarctic oceanic plates under the South American continent over at least ~200 million years. Tectonic and climatic conditions vary markedly along this north-south-oriented plate boundary, which thus represents an ideal natural laboratory for studying tectonic and climatic segmentation processes and their possible feedbacks. Most of the seismic energy on Earth is released by earthquakes in subduction zones, like the giant 1960 Mw 9.5 event in south-central Chile. However, the mechanisms that segment surface deformation during and between these giant events have remained poorly understood. The Andean margin is a key area for studying seismotectonic processes because of its along-strike variability under similar plate-kinematic boundary conditions. Active deformation has been widely studied in the central part of the Andes, but the south-central sector of the orogen has received less research attention. This study focuses on tectonics at the Neogene and late Quaternary time scales in the Main Cordillera and coastal forearc of the south-central Andes. For both domains I document the existence of previously unrecognized active faults and present estimates of deformation rates and fault kinematics. Furthermore, these data are correlated to address fundamental mountain-building processes such as strain partitioning and large-scale segmentation. In the Main Cordillera domain and at the Neogene timescale, I integrate structural and stratigraphic field observations with published isotopic ages to propose four main phases of coupled styles of tectonics and distribution of volcanism and magmatism. These phases can be related to the geometry and kinematics of plate convergence.
At the late Pleistocene timescale, I integrate field observations with lake seismic and bathymetric profiles from the Lago Laja region, located near the Andean drainage divide. These data reveal Holocene extensional faults, which define the Lago Laja fault system. This fault system has no significant strike-slip component, contrasting with the Liquiñe-Ofqui dextral intra-arc system to the south, where Holocene strike-slip markers are ubiquitous. This contrast in structural style along the arc coincides with a marked change in along-strike fault geometries in the forearc, across the Arauco Peninsula. On this basis I propose that a net gradient in the degree of partitioning of oblique subduction occurs across the Arauco transition zone. To the north, the margin-parallel component of oblique convergence is distributed in a wide zone of diffuse deformation, while to the south it is partitioned along an intra-arc, margin-parallel strike-slip fault zone. In the coastal forearc domain and at the Neogene timescale, I integrate structural and stratigraphic data from field observations, industry reflection-seismic profiles and boreholes to emphasize the influence of climate-driven filling of the trench on the mechanics and kinematics of the margin. I show that forearc basins in the 34-45°S segment record Eocene to early Pliocene extension and subsidence followed by ongoing uplift and contraction since the late Pliocene. I interpret the first stage as caused by tectonic erosion due to high plate convergence rates and reduced trench fill. The subsequent stage, in turn, is related to accretion caused by low convergence rates and the rapid increase in trench fill after the onset of Patagonian glaciations and climate-driven exhumation at ~6-5 Ma.
On the late Quaternary timescale, I integrate offshore seismic profiles with the distribution of deformed marine terraces from Isla Santa María, dated by the radiocarbon method, to show that inverted reverse faulting controls the coastal geomorphology and the segmentation of surface deformation. There, a cluster of microearthquakes illuminates one of these reverse faults, which presumably reaches the plate interface. Furthermore, I use accounts of coseismic uplift during the 1835 M>8 earthquake made by Charles Darwin to propose that this active reverse fault has been mechanically coupled to the megathrust. This has important implications for the assessment of seismic hazards in this and other similar regions. These results underscore the need to study plate-boundary deformation processes at various temporal and spatial scales and to integrate geomorphologic, structural, stratigraphic, and geophysical data sets in order to understand the present distribution and causes of tectonic segmentation.
Answer Set Programming (ASP) emerged in the late 1990s as a new logic programming paradigm, having its roots in nonmonotonic reasoning, deductive databases, and logic programming with negation as failure. The basic idea of ASP is to represent a computational problem as a logic program whose answer sets correspond to solutions, and then to use an answer set solver for finding answer sets of the program. ASP is particularly suited for solving NP-complete search problems; applications include product configuration, diagnosis, and graph-theoretical problems, e.g. finding Hamiltonian cycles. Along different lines of ASP research, many extensions of the basic formalism have been proposed. The most intensively studied one is the modelling of preferences in ASP. Preferences constitute a natural and effective way of selecting preferred solutions among a plethora of solutions to a problem. For example, preferences have been successfully used for timetabling, auctioning, and product configuration. In this thesis, we concentrate on preferences within answer set programming. Among several formalisms and semantics for preference handling in ASP, we concentrate on ordered logic programs with the underlying D-, W-, and B-semantics. In this setting, preferences are defined among the rules of a logic program; they select preferred answer sets among the (standard) answer sets of the underlying logic program. Up to now, those preferred answer sets have been computed either via a compilation method or by meta-interpretation. Hence, the question arises whether and how preferences can be integrated into an existing ASP solver. To answer this question, we develop an operational graph-based framework for the computation of answer sets of logic programs. Then, we integrate preferences into this operational approach. We empirically observe that our integrative approach performs better in most cases than the compilation method or meta-interpretation.
Another research issue in ASP is optimization methods that remove redundancies, as also found in database query optimizers. For these purposes, the rather recently suggested notion of strong equivalence for ASP can be used. If a program is strongly equivalent to a subprogram of itself, then one can always use the subprogram instead of the original program, a technique which serves as an effective optimization method. Up to now, strong equivalence has not been considered for logic programs with preferences. In this thesis, we tackle this issue and generalize the notion of strong equivalence to ordered logic programs. We give necessary and sufficient conditions for the strong equivalence of two ordered logic programs. Furthermore, we provide program transformations for ordered logic programs and show to what extent preferences can be simplified. Finally, we present two new applications for preferences within answer set programming. First, we define new procedures for group decision making, which we apply to the problem of scheduling a group meeting. As a second new application, we reconstruct within ASP a linguistic problem appearing in German dialects. Regarding linguistic studies, there is an ongoing debate about how unique the rule systems of language are in human cognition. The reconstruction of grammatical regularities with tools from computer science has consequences for this debate: if grammars can be modelled this way, then they share core properties with other non-linguistic rule systems.
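The notion of an answer set underlying the two abstracts above can be illustrated with a minimal brute-force checker. The rule encoding, atom names, and the even-loop example below are illustrative choices, not taken from the thesis, but the reduct construction follows the standard Gelfond-Lifschitz definition of stable models:

```python
from itertools import combinations

def least_model(rules):
    """Least model of a negation-free program: fire rules until fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model

def answer_sets(program, atoms):
    """Enumerate stable models of a ground normal program by brute force.

    program: list of rules (head, pos_body, neg_body), bodies as frozensets.
    A candidate X is an answer set iff the least model of the
    Gelfond-Lifschitz reduct of the program w.r.t. X equals X.
    """
    found = []
    atoms = sorted(atoms)
    for r in range(len(atoms) + 1):
        for cand in combinations(atoms, r):
            x = set(cand)
            # Reduct: drop rules whose negative body intersects X,
            # then strip the (now satisfied) negative bodies.
            reduct = [(h, pb) for h, pb, nb in program if not (nb & x)]
            if least_model(reduct) == x:
                found.append(x)
    return found

# Classic even loop:  a :- not b.   b :- not a.
prog = [("a", frozenset(), frozenset({"b"})),
        ("b", frozenset(), frozenset({"a"}))]
models = answer_sets(prog, {"a", "b"})
# Two answer sets: {a} and {b}
```

Production ASP solvers avoid this exponential enumeration with dedicated search procedures; the sketch only illustrates the semantics that such solvers compute.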
The salivary glands of the blowfly were injected with luminescent oxygen-sensitive microbeads. The changes in oxygen content within individual gland tubules during hormone-induced secretory activity were quantified. The measurements are based on an upgraded phase-modulation technique, where the phase shift of the sensor phosphorescence is determined independently from concentration and background signals. We show that the combination of a lock-in amplifier with a fluorescence microscope results in a convenient setup to measure oxygen concentrations within living animal tissues at the cellular level.
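The phase-modulation readout described above rests on two standard frequency-domain relations: tan(phi) = omega * tau links the measured phase shift to the luminescence lifetime, and the Stern-Volmer law links lifetime quenching to oxygen concentration. A minimal sketch; the modulation frequency, unquenched lifetime, and Stern-Volmer constant below are illustrative values, not calibration data from this work:

```python
from math import pi, radians, tan

def phase_to_lifetime(phase_deg, mod_freq_hz):
    """Luminescence lifetime from the phase shift in frequency-domain
    (phase-modulation) sensing: tan(phi) = omega * tau."""
    omega = 2.0 * pi * mod_freq_hz
    return tan(radians(phase_deg)) / omega

def oxygen_from_lifetime(tau, tau0, k_sv):
    """Oxygen concentration from the Stern-Volmer relation
    tau0 / tau = 1 + K_SV * [O2]."""
    return (tau0 / tau - 1.0) / k_sv

# Illustrative numbers: 5 kHz modulation, 45 degree phase shift,
# unquenched lifetime 50 us, hypothetical K_SV = 2e-3 per uM.
tau = phase_to_lifetime(45.0, 5000.0)          # ~31.8 us
o2 = oxygen_from_lifetime(tau, 50e-6, 2e-3)    # oxygen in uM
```

Because the phase shift, unlike the raw intensity, is insensitive to probe concentration and background signal, this readout is well suited to measurements inside autofluorescent tissue.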
Quantum dots (QDs) are common as luminescent markers for imaging in biological applications because their optical properties appear inert to the surrounding solvent. This, together with broad and strong absorption bands and intense, sharp, tuneable luminescence bands, makes them interesting candidates for methods utilizing Förster resonance energy transfer (FRET), e.g. for sensitive homogeneous fluoroimmunoassays (FIA). In this work we demonstrate energy transfer from Eu3+-trisbipyridine (Eu-TBP) donors to CdSe-ZnS QD acceptors in solutions with and without serum. The QDs are commercially available CdSe-ZnS core-shell particles emitting at 655 nm (QD655). The FRET system was achieved by the binding of the streptavidin-conjugated donors to the biotin-conjugated acceptors. After excitation of Eu-TBP, and as a result of the energy transfer, the luminescence of the QD655 acceptors showed lengthened decay times similar to those of the donors. The energy transfer efficiency, calculated from the decay times of the bound and the unbound components, amounted to 37%. The Förster radius, estimated from the absorption and emission bands, was ca. 77 Å. The effective binding ratio, which depends not only on the ratio of binding pairs but also on unspecific binding, was obtained from the concentration dependence of the donor emission. As serum promotes unspecific binding, the overall FRET efficiency of the assay was reduced. We conclude that QDs are good substitutes for conventional acceptors in FRET if combined with slow-decay donors like europium. The investigation of the influence of the serum provides guidance towards improving the binding properties of QD assays.
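The efficiency and distance quoted in such analyses follow from two textbook FRET relations: E = 1 - tau_DA / tau_D from the donor lifetimes, and E = 1 / (1 + (r / R0)**6) for the distance dependence. A minimal sketch; the numeric check simply re-uses the efficiency (37%) and Förster radius (77 Å) reported above as an illustration of the formalism, not as a reanalysis:

```python
def fret_efficiency_from_lifetimes(tau_da, tau_d):
    """FRET efficiency from donor lifetimes with (tau_da)
    and without (tau_d) the acceptor: E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_da / tau_d

def distance_from_efficiency(e, r0):
    """Donor-acceptor distance from E = 1 / (1 + (r/R0)**6),
    solved for r; r0 is the Förster radius."""
    return r0 * ((1.0 / e) - 1.0) ** (1.0 / 6.0)

# Example: a donor lifetime shortened from 1.0 ms to 0.63 ms gives E = 0.37;
# with R0 = 77 Angstrom this corresponds to r of roughly 84 Angstrom.
e = fret_efficiency_from_lifetimes(0.63e-3, 1.0e-3)
r = distance_from_efficiency(0.37, 77.0)
```

The sixth-power dependence is what makes FRET so sensitive to distance near R0, and so forgiving of unspecific binding at much larger separations.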
To determine whether Förster resonance energy transfer (FRET) measurements can provide quantitative distance information in single-molecule fluorescence experiments on polypeptides, we measured FRET efficiency distributions for donor and acceptor dyes attached to the ends of freely diffusing polyproline molecules of various lengths. The observed mean FRET efficiencies agree with those determined from ensemble lifetime measurements but differ considerably from the values expected from Förster theory, with polyproline treated as a rigid rod. At donor–acceptor distances much less than the Förster radius R0, the observed efficiencies are lower than predicted, whereas at distances comparable to and greater than R0, they are much higher. Two possible contributions to the former are incomplete orientational averaging during the donor lifetime and, because of the large size of the dyes, breakdown of the point-dipole approximation assumed in Förster theory. End-to-end distance distributions and correlation times obtained from Langevin molecular dynamics simulations suggest that the differences for the longer polyproline peptides can be explained by chain bending, which considerably shortens the donor–acceptor distances.
A technique has been developed to measure absolute intracellular oxygen concentrations in green plants. Oxygen-sensitive phosphorescent microbeads were injected into the cells and an optical multifrequency phase-modulation technique was used to discriminate the sensor signal from the strong autofluorescence of the plant tissue. The method was established using photosynthesis-competent cells of the giant algae Chara corallina L., and was validated by application to various cell types of other plant species.
Absorption and fluorescence properties of 4 hydraulic oils (3 biological and 1 petroleum-based) were investigated. In-situ LIF (laser-induced fluorescence) analysis of the oils on a brown sandy loam soil was performed. With calibration, quantitative detection was achieved. Estimated limits of detection were below ca. 500 mg/kg for the petroleum-based oil and ca. 2000 mg/kg for one biological oil. A semi-quantitative classification scheme is proposed for monitoring of the biological oils. This approach was applied to investigate the migration of a biological oil in soil-containing compartments, namely a soil column and a soil bed.
Results of an inter-laboratory round-robin study of the application of time-resolved emission spectroscopy (TRES) to the speciation of uranium(VI) in aqueous media are presented. The round-robin study involved 13 independent laboratories, using various instrumentation and data analysis methods. Samples were prepared based on appropriate speciation diagrams and, in general, were found to be chemically stable for at least six months. Four different types of aqueous uranyl solutions were studied: (1) acidic medium where UO22+(aq) is the single emitting species, (2) uranyl in the presence of fluoride ions, (3) uranyl in the presence of sulfate ions, and (4) uranyl in aqueous solutions at different pH, promoting the formation of hydrolyzed species. Results between the laboratories are compared in terms of the number of decay components, luminescence lifetimes, and spectral band positions. The successes and limitations of TRES in uranyl analysis and speciation in aqueous solutions are discussed.
Steady-state and time-resolved fluorescence methods were applied to investigate the fluorescence properties of humic substances of different origins. Using standard 2D emission and total luminescence spectra, fluorescence maxima, the width of the fluorescence band and a relative fluorescence quantum efficiency were determined. Different trends for fulvic acids and humic acids were observed indicating differences in the heterogeneity of the sample fractions. The complexity of the fluorescence decay of humic substances is discussed and compared to simple model compounds. The effect of oxidation of humic substances on their fluorescence properties is discussed as well.
Our work goes in two directions. First, we want to transfer definitions, concepts and results of the theory of hyperidentities and solid varieties from the total to the partial case. (1) We prove that the operators χ^A_RNF and χ^E_RNF are only monotone and additive, and we show that the sets of all fixed points of these operators are characterized by only three instead of four equivalent conditions for the case of closure operators. (2) We prove that V is n-SF-solid iff clone^SF V is free with respect to itself, freely generated by the independent set {[f_i(x_1, ..., x_n)] Id^SF_n V | i ∈ I}. (3) We prove that if V is n-fluid and ~V|P(V) = ~V-iso|P(V), then V is k-unsolid for k >= n (where P(V) is the set of all V-proper hypersubstitutions of type τ). (4) We prove that a strong M-hyperquasi-equational theory is characterized by four equivalent conditions. The second direction of our work is to follow ideas which are typical for the partial case. (1) We characterize all minimal partial clones which are strongly solidifyable. (2) We define the operator χ^A_Ph, where Ph is a monoid of regular partial hypersubstitutions. Using this concept, we define the concept of a PHyp_R(τ)-solid strong regular variety of partial algebras and we prove that a PHyp_R(τ)-solid strong regular variety satisfies four equivalent conditions.
This paper investigates the formation of the ownership structure and the corporate governance system of Ukraine as a country in transition. Numerous studies consider that privatization results in the establishment of a proprietors’ motivation mechanism; on the other hand, it causes ownership concentration in the hands of a few shareholders and managers. The goal of economic reform in transition and, largely, its pace are measured by the degree to which shareholders participate in short- and long-term corporate value creation. Shareholder access to such created value depends on the ability of corporate “insiders”, especially executives and management, to claim a disproportionate share of corporate value (the “insider effect”). An econometric analysis of the correlation between privatization and macroeconomic factors examines the degree of effectiveness of economic reform in the Ukrainian regions.
This paper applies common methods to estimate unbiased coefficients for the return to schooling in Germany for the year 2004. Based on the simple Mincer-type wage equation, the return to schooling is around 9.5% per year. There is no sheepskin effect. As expected, the return in the private sector is higher than in the public sector. Females have a higher return than males, but there are no differences between East and West Germans. An Instrumental Variables and a 3-Stage-Least-Squares approach yield very high returns. To correct for sample selection, the Heckman Two-Step Procedure and the Heckman Maximum Likelihood Approach are used. For both methods the coefficients are very similar, but higher than without the correction.
This paper presents in its first section a methodological introduction to the statistics of consumer prices in Georgia. The second section gives a general idea of the development of consumer prices from January 1994 to September 1999. A detailed regional analysis is added in Section 3. The fourth section analyses the development of consumer prices for the eight main groups included in the total CPI. Section 5 compares the changes in the Georgian CPI with the movements of foreign exchange rates of the Georgian Lari. The paper ends with a summary, including a short outlook for the coming years.
The attractiveness of foreign direct investment in Russia and Ukraine: a statistical analysis
(1999)
In this paper the potential for foreign investment and the real inflows to Russia and Ukraine are examined comparatively. The analysis shows that initially both countries enjoyed significant comparative advantages in attracting foreign capital. Since the foundation of the independent states in 1992, however, their attractiveness has diverged dramatically. This difference is largely explained by the determination of the Russian government to reform the economy earlier than the Ukrainian government did. The transition to a market economy is closely connected with the development of a favorable investment climate in both countries, which includes the foundation of a stable system of property rights and a conducive legal environment.
The formation of colloids by the controlled reduction, nucleation, and growth of inorganic precursor salts in different media has been investigated for more than a century. Recently, the preparation of ultrafine particles has received much attention, since such particles offer highly promising and novel options for a wide range of technical applications (nanotechnology, electro-optical devices, pharmaceutics, etc.). The interest derives from the well-known fact that the properties of advanced materials depend critically on the microstructure of the sample. Control of the size, size distribution and morphology of the individual grains or crystallites is of the utmost importance in order to obtain the desired material characteristics. Several methods can be employed for the synthesis of nanoparticles. On the one hand, the reduction can occur in dilute aqueous or alcoholic solutions; on the other hand, the reduction process can be realized in a template phase, e.g. in well-defined microemulsion droplets. However, the stability of the formed nanoparticles depends mainly on their surface charge and can be influenced by added protective components. Quite different types of polymers, including polyelectrolytes and amphiphilic block copolymers, can for instance be used as protecting agents. The reduction and stabilization of metal colloids in aqueous solution by adding self-synthesized hydrophobically modified polyelectrolytes were studied in much more detail. The polymers used are hydrophobically modified derivatives of poly(sodium acrylate) and of maleamic acid copolymers, as well as the commercially available branched poly(ethyleneimine). The first notable result is that the polyelectrolytes used can act alone as both reducing and stabilizing agent for the preparation of gold nanoparticles. The investigation then focused on the influence of the hydrophobic substitution of the polymer backbone on the reduction and stabilization processes.
First, the polymers were added at room temperature and the reduction process was followed over a longer time period (up to 8 days). In comparison, the reduction proceeded faster at higher temperature, i.e. at 100°C. In both cases metal nanoparticles of colloidal dimensions can be produced. However, the size and shape of the individual nanoparticles depend mainly on the polymer added and the temperature protocol used. In a second part, the influence of the aforementioned polyelectrolytes on the phase behaviour as well as on the properties of the inverse micellar region (L2 phase) of quaternary systems consisting of a surfactant, toluene-pentanol (1:1) and water was investigated. Most of the present work was carried out with the anionic surfactant sodium dodecylsulfate (SDS) and the cationic surfactant cetyltrimethylammonium bromide (CTAB), since they can interact with the oppositely charged polyelectrolytes and the microemulsions formed with these surfactants present a large water-in-oil region. Subsequently, the polymer-modified microemulsions were used as new templates for the synthesis of inorganic particles, ranging from metals to complex crystallites, of very small size. The water droplets can indeed act as nanoreactors for the nucleation and growth of the particles, and the added polymer can influence the droplet size, the droplet-droplet interactions, as well as the stability of the surfactant film by the formation of polymer-surfactant complexes. One further advantage of the polymer-modified microemulsions is the possibility of stabilizing the primary formed nanoparticles via polymer adsorption (steric and/or electrostatic stabilization). Thus, the polyelectrolyte-modified nanoparticles formed can be redispersed without flocculation after solvent evaporation.
In a recent contribution in Nature (vol. 442, pp. 555-558) Austin & Vivanco showed that sunlight is the dominant factor in the decomposition of grass litter in a semi-arid grassland in Argentina. The quantification of this effect was portrayed as a novel finding. I put this result in the context of three other publications, from as early as 1980, that quantified photodegradation. My synopsis shows that photodegradation is an important process in semi-arid grasslands in South America, North America and eastern Europe.
In the present study, the photophysical properties of [N]phenylenes were studied by means of stationary and time-resolved absorption and fluorescence spectroscopy (in THF at room temperature). For biphenylene (1) and linear [3]phenylene (2a), internal conversion (IC) with quantum yields ΦIC > 0.99 is by far the dominant mechanism of S1 state deactivation. Angular [3]phenylene (3a), the zig-zag [4]- and [5]phenylenes (3b), (3c), and the triangular [4]phenylene (4) show fluorescence emission with fluorescence quantum yields and lifetimes between ΦF = 0.07 for (3a) and 0.21 for (3c) and τF = 20 ns for (3a) and 81 ns for (4). Also, compounds (3) and (4) exhibit triplet formation upon photoexcitation with quantum yields as high as ΦISC = 0.45 for (3c). The strong differences in the fluorescence properties and in the triplet formation efficiencies between (1) and (2a) on the one hand and (3) and (4) on the other are related to the remarkable variation of the internal conversion (IC) rate constants kIC. A tentative classification of (1) and (2a) as “fast IC compounds”, with kIC > 10⁹ s⁻¹, and of (3) and (4) as “slow IC compounds”, with kIC ≈ 10⁷ s⁻¹, is suggested. This classification cannot simply be related to Hückel’s rule-type concepts of aromaticity, because the group of “fast IC compounds” consists of “antiaromatic” (1) and “aromatic” (2a), and the group of “slow IC compounds” consists of “antiaromatic” (3b), (4) and “aromatic” (3a), (3c). The IC in the [N]phenylenes is discussed within the framework of the so-called energy gap law established for non-radiative processes in benzenoid hydrocarbons.
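The energy gap law invoked above can be stated compactly. The following is the standard weak-coupling (Englman–Jortner) form commonly cited for benzenoid hydrocarbons; the prefactor $A$ and the dimensionless parameter $\gamma$ are generic placeholders, not values fitted to the [N]phenylenes:

```latex
k_{\mathrm{IC}} \;\approx\; A \,
\exp\!\left(-\,\gamma\,\frac{\Delta E(S_1 - S_0)}{\hbar\,\omega_{M}}\right)
```

Here $\omega_M$ denotes the highest-frequency accepting vibrational mode (typically the C–H stretch), so that a larger $S_1$–$S_0$ energy gap yields exponentially slower internal conversion.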
The drift time spectra of polycyclic aromatic hydrocarbons (PAH), alkylbenzenes and alkylphenylethers were recorded with a laser-based ion mobility (IM) spectrometer. The ion mobilities of all compounds were determined in helium as drift gas. This allows the calculation of the diffusion cross sections (Ωcalc) on the basis of the exact hard sphere scattering model (EHSSM) and their comparison with the experimentally determined diffusion cross sections (Ωexp). These Ωexp/Ωcalc correlations are presented for molecules with a rigid structure, such as PAH, and confirm the reliability of the theoretical model and the experimental method. The increase in the selectivity of IM spectrometry is demonstrated using resonance enhanced multiphoton ionisation (REMPI) at atmospheric pressure, realized by tuneable lasers. The REMPI spectra of nine alkylbenzenes and alkylphenylethers are investigated. On the basis of these spectra, the complete qualitative distinction of eight compounds in a mixture is shown. These experiments are extended to alkylbenzene isomer mixtures.
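For reference, the conversion from a measured low-field mobility to a cross section is usually done via the Mason–Schamp equation; the symbols below follow the textbook convention ($q$ ion charge, $N$ drift-gas number density, $\mu$ ion–neutral reduced mass, $T$ temperature, $k_B$ Boltzmann constant) and are not taken from the paper itself:

```latex
K \;=\; \frac{3q}{16N}\,
\sqrt{\frac{2\pi}{\mu\,k_{B}\,T}}\;
\frac{1}{\Omega}
```

so that $\Omega_{\mathrm{exp}}$ follows directly from the drift-time-derived mobility $K$, and can then be compared with the EHSSM value.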
This issue of Linguistics in Potsdam contains a number of papers that grew out of the workshop Descriptive and Empirical Adequacy in Linguistics, held in Berlin on December 17–19, 2005. One of the goals of this meeting was to bring together scholars working in various frameworks (with emphasis on the Minimalist Program and Optimality Theory) and to discuss matters concerning descriptive and empirical adequacy. Another explicit goal was to discuss the question whether Minimalism and Optimality Theory should be considered incompatible and, hence, competing theories, or whether the two frameworks should rather be considered complementary in certain respects (see http://let.uvt.nl/deal05/call.html for the call for papers). Five of the seven papers in this volume directly grew out of the oral presentations given at the workshop. Although Vieri Samek-Lodovici’s paper was not part of the workshop, it can also be considered a result of the workshop, since it pulls together some of his many comments during the discussion time. The paper by Eva Engels and Sten Vikner discusses a phenomenon that has received much interest from both minimalist and optimality-theoretic syntax in recent years, Scandinavian object shift. The paper may serve as a practical example of a claim that is repeatedly made in this volume: minimalist and OT analyses, even where they might be competing, can fruitfully inform each other in a constructive manner, leading to a deeper understanding of syntactic phenomena.
The limited capacity of working memory forces people to update its contents continuously. Two aspects of the updating process were investigated in the present experimental series. The first series concerned the question whether it is possible to update several representations in parallel. Similar results were obtained for the updating of object features and for the updating of whole objects: in both cases, participants were able to update representations in parallel. The second experimental series addressed the question whether working memory representations that are replaced during updating disappear immediately or interfere with the new representations. Evidence for the persistence of old representations was found both under working memory conditions and under conditions exceeding working memory capacity. These results contradict the hypothesis that working memory contents are protected from proactive interference from long-term memory contents.
Semiclassical asymptotics for the scattering amplitude in the presence of focal points at infinity
(2006)
We consider scattering in $\R^n$, $n\ge 2$, described by the Schr\"odinger operator $P(h)=-h^2\Delta+V$, where $V$ is a short-range potential. With the aid of Maslov theory, we give a geometrical formula for the semiclassical asymptotics as $h\to 0$ of the scattering amplitude $f(\omega_-,\omega_+;\lambda,h)$ ($\omega_+\neq\omega_-$) which remains valid in the presence of focal points at infinity (caustics). Crucial for this analysis are precise estimates on the asymptotics of the classical phase trajectories and the relationship between caustics in Euclidean phase space and caustics at infinity.
Soils contain a large amount of carbon (C), which is a critical regulator of the global C budget. Even small changes in the processes governing soil C cycling have the potential to release considerable amounts of CO2, a greenhouse gas (GHG), adding radiative forcing to the atmosphere and hence contributing to climate change. Increased temperatures will probably create a feedback, causing soils to release more GHGs. Furthermore, changes in the soil C balance affect soil fertility and soil quality, potentially degrading soils and reducing their function as an important resource. Consequently, the assessment of soil C dynamics under present, recent past and future environmental conditions is not only of scientific interest; it requires an integrated consideration of the main factors and processes governing soil C dynamics. To perform this assessment, an eco-hydrological modelling tool was used and extended by a process-based description of coupled soil carbon and nitrogen turnover. The extended model aims at delivering sound information on soil C storage changes, besides changes in water quality, water quantity and vegetation growth, under global change impacts in meso- to macro-scale river basins, demonstrated exemplarily for a Central European river basin (the Elbe). As a result this study: ▪ Provides information on the joint effects of land-use (land cover and land management) and climate changes on the cropland soil C balance in the Elbe river basin (Central Europe), presently and in the future. ▪ Evaluates which processes, and at what level of process detail, have to be considered to perform an integrated simulation of soil C dynamics at the meso- to macro-scale, and demonstrates the model’s capability to simulate these processes compared to observations. ▪ Proposes a process description relating soil C pools and turnover properties to readily measurable quantities.
This reduces the number of model parameters, enhances the comparability of model results to observations, and delivers the same performance in simulating long-term soil C dynamics as other models. ▪ Presents an extensive assessment of the parameter and input data uncertainty and its importance, both temporally and spatially, for modelling soil C dynamics. For the basin-scale assessments it is estimated that croplands in the Elbe basin currently act as a net source of carbon (a net annual C flux of 11 g C m⁻² yr⁻¹, or 1.57 × 10⁶ tons CO2 yr⁻¹ for the entire croplands on average), although this depends strongly on the amount of harvest by-products remaining on the field. Anticipated future climate change, and the climate change already observed in the basin, accelerate soil C loss and increase the source strength (an additional 3.2 g C m⁻² yr⁻¹, or 0.48 × 10⁶ tons CO2 yr⁻¹ for the entire croplands). However, anticipated changes in agro-economic conditions, translating into altered crop share distributions, have stronger effects on soil C storage than climate change. Depending on the future use of land expected to fall out of agricultural use (~30 % of the cropland area as “surplus” land), the basin either loses considerable soil C and the net annual C flux to the atmosphere increases (surplus used as black fallow), or the basin converts to a net sink of C (sequestering 0.44 × 10⁶ tons CO2 yr⁻¹ under extensified use as ley-arable), or it reacts with a decreased source strength when bioenergy crops are used. Bioenergy crops additionally offer a considerable potential for fossil fuel substitution (~37 PJ yr⁻¹; 1 PJ = 10¹⁵ J), whereas the basin-wide use of harvest by-products for energy generation has to be viewed critically, although it offers an annual energy potential of approximately 125 PJ. Harvest by-products play a central role in soil C reproduction, and between 50 and 80 % should remain on the fields in order to maintain soil quality and fertility.
The established modelling tool allows quantifying climate, land use and major land management impacts on the soil C balance. A novel aspect is that the SOM turnover description is embedded in an eco-hydrological river basin model, allowing an integrated consideration of water quantity, water quality, vegetation growth, agricultural productivity and soil carbon changes under different environmental conditions. The methodology and assessment presented here demonstrate the potential for an integrated assessment of soil C dynamics alongside other ecosystem services under global change impacts, and provide information on the potential of soils for climate change mitigation (soil C sequestration) and on their fertility status.
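The core of any such process-based soil C description is first-order pool turnover. The sketch below is a deliberately minimal one-pool illustration of that principle; the initial stock, input flux, and rate constant are invented for illustration and are not parameters of the model described above.

```python
# Minimal one-pool soil carbon turnover sketch: dC/dt = I - k*C,
# integrated with forward Euler. All numbers are illustrative assumptions.

def simulate_soil_carbon(c0, input_flux, k, years, dt=1.0):
    """Return yearly soil C stocks for a pool with constant litter input
    `input_flux` (g C m^-2 yr^-1) and first-order decay rate `k` (yr^-1)."""
    c = c0
    trajectory = [c]
    for _ in range(int(years / dt)):
        c = c + dt * (input_flux - k * c)
        trajectory.append(c)
    return trajectory

# A pool starting below its equilibrium I/k accumulates carbon towards it:
traj = simulate_soil_carbon(c0=4000.0, input_flux=200.0, k=0.04, years=100)
equilibrium = 200.0 / 0.04  # = 5000 g C m^-2
```

With a constant input I and decay rate k, the stock relaxes towards the equilibrium I/k; shifting either term (e.g. removing harvest by-products lowers I) moves the soil between source and sink behaviour.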
The primary objective of this work was to develop a laser source for fundamental investigations in the field of laser-materials interactions. In particular, it is intended to facilitate the study of the influence of the temporal energy distribution, such as the interaction between adjacent pulses, on ablation processes. Therefore, the aim was to design a laser with a highly flexible and easily controllable temporal energy distribution. The laser to meet these demands is an SBS-laser with optional active mode-locking. The nonlinear reflectivity of the SBS-mirror leads to passive Q-switching and yields ns-pulse bursts with µs spacing. The pulse train parameters such as pulse duration, pulse spacing, pulse energy and number of pulses within a burst can be individually adjusted by tuning the pump parameters and the starting conditions for the laser. Another feature of the SBS-reflection is phase conjugation, which leads to an excellent beam quality thanks to the compensation of phase distortions. Transverse fundamental mode operation and a beam quality better than 1.4 times the diffraction limit can be maintained for average output powers of up to 10 W. In addition to the dynamics on a ns-timescale described above, a defined splitting of each ns-pulse into a train of ps-pulses can be achieved by additional active mode-locking. This twofold temporal focussing of the intensity leads to single pulse energies of up to 2 mJ at pulse durations of approximately 400 ps, which corresponds to a pulse peak power of 5 MW. While the pulse duration is of the same order of magnitude as that of other passively Q-switched lasers with simultaneous mode-locking, the pulse energy and pulse peak power exceed the values of these systems found in the literature by an order of magnitude. To the best of my knowledge the laser presented here is the first implementation of a self-starting mode-locked SBS-laser oscillator.
In order to gain a better understanding and control of the transient output of the laser two complementary numerical models were developed. The first is based on laser rate equations which are solved for each laser mode individually while the mode-locking dynamics are calculated from the resultant transient spectrum. The rate equations consider the mean photon densities in the resonator, therefore the propagation of the light inside the resonator is not properly displayed. The second model, in contrast, introduces a spatial resolution of the resonator and hence the propagation inside the resonator can more accurately be considered. Consequently, a mismatch between the loss modulation frequency and the resonator round trip time can be conceived. The model calculates all dynamics in the time domain and therefore the spectral influences such as the Stokes-shift have to be neglected. Both models achieve an excellent reproduction of the ns-dynamics that are generated by the SBS-Q-switch. Separately, each model fails to reproduce all aspects of the ps-dynamics of the SBS-laser in detail. This can be attributed to the complexity of the numerous physical processes involved in this system. But thanks to their complementary nature they provide a very useful tool for investigating the various influences on the dynamics of the mode-locked SBS-laser individually. These aspects can eventually be recomposed to give a complete picture of the mechanisms which govern the output dynamics. Among the aspects under scrutiny were in particular the start resonator quality which determines the starting condition for the SBS-Q-switch, the modulation depth of the AOM and the phonon lifetime as well as the Brillouin-frequency of the SBS-medium. 
The numerical simulations and the experiments have opened several doors inviting further investigations and promising a potential for further improvement of the experimental results: The results of the simulations, in combination with the experimental results which determined the starting conditions for the simulations, leave no doubt that the bandwidth generation can primarily be attributed to the SBS-Stokes-shift during the buildup of the Q-switch pulse. For each resonator round trip, bandwidth is generated by shifting a part of the revolving light in frequency. The magnitude of the frequency shift corresponds to the Brillouin-frequency, which is a constant of the SBS material and amounts to 240 MHz in the case of SF6. The modulation of the AOM merely provides an exchange of population between spectrally adjacent modes and therefore diminishes a modulation in the spectrum. By use of a material with a Brillouin-frequency in the GHz range, the bandwidth generation can be considerably accelerated, thereby shortening the pulse duration. Also, it was demonstrated that yet another nonlinear effect of the SBS can be exploited: If the phonon lifetime is short compared to the resonator round trip time, we obtain a modulation in the SBS-reflectivity that supports the modulation of the AOM. The application of an external optical feedback by a conventional mirror turns out to be an alternative to the AOM in synchronizing the longitudinal resonator modes. The interesting feature of this system is that, although highly complex in its physical processes and temporal output dynamics, it is very simple and inexpensive from a technical point of view. No expensive modulators and no control electronics are necessary. Finally, the numerical models constitute a powerful tool for the investigation of the emission dynamics of complex laser systems on arbitrary timescales and can also display the spectral evolution of the laser output.
In particular it could be demonstrated that differences in the results of the complementary models vanish for systems of lesser complexity.
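The first of the two complementary models rests on laser rate equations. As a stripped-down illustration of how a rate-equation model produces a Q-switch spike, here is the standard dimensionless single-mode point model (photon density φ, inversion n, threshold normalised to 1). It is emphatically not the thesis' multi-mode SBS model, and all numbers are illustrative assumptions.

```python
# Toy point-model Q-switch rate equations (dimensionless, single mode):
#   dphi/dt = phi * (n - 1)   photon density grows while n is above threshold
#   dn/dt   = -n * phi        stimulated emission depletes the inversion
# Integrated with forward Euler; parameters are illustrative only.

def q_switch_pulse(n0=2.0, phi0=1e-6, steps=30000, dt=1e-3):
    """Return the photon-density trajectory of a single giant Q-switch pulse."""
    n, phi = n0, phi0
    phis = [phi]
    for _ in range(steps):
        dphi = phi * (n - 1.0)   # gain minus (normalised) loss
        dn = -n * phi            # inversion depletion
        phi += dt * dphi
        n += dt * dn
        phis.append(phi)
    return phis

phis = q_switch_pulse()  # photon density spikes into a giant pulse, then decays
```

Starting from a small seed well above threshold (n0 = 2), the photon density grows exponentially, burns down the inversion, and collapses again: the characteristic giant-pulse shape that the full multi-mode SBS model reproduces with far more physics.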
This paper focuses on some of the factors explaining recent trends in decentralisation, and some areas where decentralisation has had a positive impact, including bringing citizens into public affairs, improving sub-national public administration, and stimulating local economic development. It concludes by exploring the dangers and the implications for governments of differing capabilities starting out on the decentralisation path. More specifically, the paper stresses the underlying incentive structures within states in reform. It suggests a country-specific discussion of both vertical and horizontal incentive structures in decentralisation, as well as clear-cut accountability within a public sector in change. While vertical incentive structures mean defined rules for intergovernmental relationships, horizontal incentive structures mean defined rules between local governments, their citizens and the local private sector. Both sets of incentives need to be reformed jointly to stimulate better results from decentralisation and better performance of local government. Neglecting either one could harm development. Above all, politics and processes are key to understanding, and eventually managing, decentralisation effectively.
The end of the cold war division of the Baltic Sea in 1989, and the three Baltic states’ return to independence in 1991 created new opportunities for the decision-makers of the area, as well as new possibilities for fashioning security in the region. This article will examine the security debate affecting the Baltic Sea region in the post-cold war period, and in particular, the relevance of the European Union to that debate. The following section will examine various concepts of security relevant to the Baltic region; the third section looks at the EU and the Baltic area; and the last part deals with the implications that EU membership by the Baltic Sea states may have for the security of the Baltic Sea zone.
The article mobilises the concept of strategic culture in order to identify the impact of history upon contemporary security policy. The article will first look at the "wholesale construction" of a strategic culture after the Second World War in West Germany before exploring its impact upon security policy since the end of the Cold War in two areas: the Bundeswehr's out-of-area role and conscription. The central argument presented here is that the strategic culture of the former Federal Republic now writ large on to the new united Germany sets the context within which security policies are designed. This strategic culture, as will be argued, acts as both a facilitating and a restraining variable on behaviour, making certain policy options possible and others impossible.
The aim of this study was to provide deeper insights in passerine phylogenetic relationships using new molecular markers. The monophyly of the largest avian order Passeriformes (~59% of all living birds) and the division into its suborders suboscines and oscines are well established. Phylogenetic relationships within the group have been extremely puzzling, as most of the evolutionary lineages originated through rapid radiation. Numerous studies have hypothesised conflicting passerine phylogenies and have repeatedly stimulated further research with new markers. In the present study, I used three different approaches to contribute to the ongoing phylogenetic debate in Passeriformes. I investigated the recently introduced gene ZENK for its phylogenetic utility for passerine systematics in combination and comparison to three already established nuclear markers. My phylogenetic analyses of a comprehensive data set yielded highly resolved, consistent and strongly supported trees. I was able to show the high utility of ZENK for elucidating phylogenetic relationships within Passeriformes. For the second and third approach, I used chicken repeat 1 (CR1) retrotransposons as phylogenetic markers. I presented two specific CR1 insertions as apomorphic characters, whose presence/absence pattern significantly contributed to the resolution of a particular phylogenetic uncertainty, namely the position of the rockfowl species Picathartes spp. in the passerine tree. Based on my results, I suggest a closer relationship of these birds to crows, ravens, jays, and allies. For the third approach, I showed that CR1 sequences contain phylogenetic signal and investigated their applicability in more detail. In this context, I screened for CR1 elements in different passerine birds, used sequences of several loci to construct phylogenetic trees, and evaluated their reliability. I was able to corroborate existing hypotheses and provide strong evidence for some new hypotheses, e.g. 
I suggest a revision of the taxa Corvidae and Corvinae, as vireos are more closely related to crows, ravens, and allies. The subdivision of the Passerida into three superfamilies, Sylvioidea, Passeroidea, and Muscicapoidea, was strongly supported. I found evidence for a split within Sylvioidea into two clades, one consisting of tits and the other comprising warblers, bulbuls, laughingthrushes, whitethroats, and allies. Whereas Passeridae appear to be paraphyletic, the monophyly of weavers and estrild finches as a separate clade was strongly supported. The sister-taxon relationship of dippers and the thrushes/flycatchers/chats assemblage was corroborated, and I suggest a closer relationship of waxwings and kinglets to wrens, tree-creepers, and nuthatches.
This study introduces a method for the multiparallel analysis of small organic compounds in the unicellular green alga Chlamydomonas reinhardtii, one of the premier model organisms in cell biology. The comprehensive study of changes in metabolite composition, or metabolomics, in response to environmental, genetic or developmental signals is an important complement to other functional genomic techniques in the effort to understand how genes, proteins and metabolites are all integrated into a seamless and dynamic network to sustain cellular functions. The sample preparation protocol was optimized to quickly inactivate enzymatic activity, achieve maximum extraction capacity and process large sample quantities. As a result of the rapid sampling, extraction and analysis by gas chromatography coupled to time-of-flight mass spectrometry (GC-TOF), more than 800 analytes can be measured from a single sample, of which over 100 could be positively identified. As part of the analysis of GC-TOF raw data, aliquot ratio analysis for systematically removing artifact signals and tools for the use of principal component analysis (PCA) on metabolomic datasets are proposed. Cells subjected to nitrogen (N), phosphorus (P), sulfur (S) or iron (Fe) depleted growth conditions develop highly distinctive metabolite profiles, with metabolites implicated in many different processes being affected in their concentration during adaptation to nutrient deprivation. Metabolite profiling allowed the characterization of both specific and general responses to nutrient deprivation at the metabolite level. Modulation of the substrates for N-assimilation and the oxidative pentose phosphate pathway indicated a priority for maintaining the capability for immediate activation of N assimilation even under conditions of decreased metabolic activity and arrested growth, while the rise in 4-hydroxyproline in S-deprived cells could be related to enhanced degradation of cell wall proteins.
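As a sketch of the PCA step used on such metabolomic datasets: mean-center the sample-by-metabolite matrix and extract the leading eigenvector of its covariance matrix, here by plain power iteration. The tiny data set is invented for illustration and has nothing to do with the actual Chlamydomonas profiles.

```python
# Minimal PCA sketch (first principal component only) via power iteration
# on the sample covariance matrix. Pure Python, invented toy data.

def first_principal_component(data, iters=200):
    """Return the unit-length leading eigenvector of the covariance of `data`
    (a list of equal-length rows: samples x variables)."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix (d x d)
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):  # power iteration converges to the top eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy "profiles" varying mainly along the direction (1, 1):
profiles = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9], [5.0, 5.1]]
pc1 = first_principal_component(profiles)
```

Projecting each profile onto pc1 (and the next few components) gives the low-dimensional scores in which distinct nutrient-deprivation responses separate.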
The adaptation to sulfur deficiency was analyzed with greater temporal resolution, and the responses of wild-type cells were compared with mutant cells deficient in SAC1, an important regulator of the sulfur deficiency response. Whereas concurrent metabolite depletion and accumulation occur during adaptation to S deprivation in wild-type cells, the sac1 mutant strain is characterized by a massive inability to sustain many processes that normally lead to transient or permanent accumulation of certain metabolites or to the recovery of metabolite levels after initial down-regulation. For most of the steps in arginine biosynthesis in Chlamydomonas, mutants have been isolated that are deficient in the respective enzyme activities. Three strains deficient in the activities of N-acetylglutamate-5-phosphate reductase (arg1), N2-acetylornithine aminotransferase (arg9), and argininosuccinate lyase (arg2), respectively, were analyzed with regard to the activation of endogenous arginine biosynthesis after withdrawal of externally supplied arginine. Enzymatic blocks in the arginine biosynthetic pathway could be characterized by precursor accumulation, such as the accumulation of argininosuccinate in arg2 cells, and by depletion of intermediates occurring downstream of the enzymatic block, e.g. N2-acetylornithine, ornithine, and argininosuccinate depletion in arg9 cells. The unexpected finding of substantial levels of the arginine pathway intermediates N-acetylornithine, citrulline, and argininosuccinate downstream of the enzymatic block in arg1 cells provided an explanation for the residual growth capacity of these cells in the absence of external arginine sources.
The presence of these compounds, together with the unusual accumulation of N-acetylglutamate, the first intermediate that commits the glutamate backbone to ornithine and arginine biosynthesis, in arg1 cells suggests that alternative pathways, possibly involving the activity of ornithine aminotransferase, may be active when the default reaction sequence to produce ornithine via acetylation of glutamate is disabled.
Uncertainties are pervasive in Earth system modelling. This is not just due to a lack of knowledge about physical processes but has its seeds in intrinsic, i.e. inevitable and irreducible, uncertainties concerning the process of modelling as well. Therefore, it is indispensable to quantify uncertainty in order to determine which results are robust under this inherent uncertainty. The central goal of this thesis is to explore how uncertainties map onto the properties of interest, such as the phase space topology and qualitative dynamics of the system. We will address several types of uncertainty and apply methods of dynamical systems theory to a trendsetting field of climate research, the Indian monsoon. For the systematic analysis of the different facets of uncertainty, a box model of the Indian monsoon is investigated, which shows a saddle-node bifurcation against those parameters that influence the heat budget of the system, accompanied by a regime shift from a wet to a dry summer monsoon. As some of these parameters are crucially influenced by anthropogenic perturbations, the question is, firstly, whether the occurrence of this bifurcation is robust against uncertainties in parameters and in the number of considered processes and, secondly, whether the bifurcation can be reached under climate change. Results indicate, for example, the robustness of the bifurcation point against all considered parameter uncertainties. Reaching the critical point under climate change seems rather improbable. A novel method is applied for the analysis of the occurrence and the position of the bifurcation point in the monsoon model under parameter uncertainties. This method combines two standard approaches: a bifurcation analysis and multi-parameter ensemble simulations.
As a model-independent and therefore universal procedure, this method allows investigating the uncertainty referring to a bifurcation in a high-dimensional parameter space in many other models. With the monsoon model, the uncertainty about the external influence of El Niño/Southern Oscillation (ENSO) is determined. There is evidence that ENSO influences the variability of the Indian monsoon, but the underlying physical mechanism is discussed controversially. As a contribution to this debate, three different hypotheses of how ENSO and the Indian summer monsoon are linked are tested. In this thesis, the coupling through the trade winds is identified as the key mechanism linking these two major climate constituents. On the basis of this physical mechanism, the observed monsoon rainfall data can be reproduced to a great extent. Moreover, this mechanism can be identified in two general circulation models (GCMs), both for the present-day situation and for future projections under climate change. Furthermore, uncertainties in the process of coupling models are investigated, where the focus is on a comparison of forced dynamics as opposed to fully coupled dynamics. The former describes a particular type of coupling, where the dynamics of one sub-module is substituted by data. Intrinsic uncertainties and constraints are identified that prevent the consistency of a forced model with its fully coupled counterpart. Qualitative discrepancies between the two modelling approaches are highlighted, which lead to an overestimation of predictability and produce artificial predictability in the forced system. The results suggest that bistability and intermittent predictability, when found in a forced model set-up, should always be cross-validated with alternative coupling designs before being taken for granted. All in all, this thesis contributes to the fundamental issue of dealing with the uncertainties that the climate modelling community is confronted with.
While some uncertainties can be accounted for in the interpretation of the model results, intrinsic uncertainties were identified that are inevitable within a given modelling paradigm and are provoked by the specific modelling approach.
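The combination of a bifurcation analysis with multi-parameter ensembles described above can be sketched in miniature: sweep an uncertain parameter, locate the stable equilibrium of a toy model, and record where it disappears. The model here is the saddle-node normal form dx/dt = r + x², not the monsoon box model itself, and the parameter range is invented.

```python
# Toy saddle-node detection: for dx/dt = r + x**2 the fixed points are
# x = ±sqrt(-r) (they exist only for r <= 0); x = -sqrt(-r) is the stable one,
# and both collide and vanish at the bifurcation r = 0.

def stable_equilibrium(r):
    """Return the stable fixed point of dx/dt = r + x**2, or None past the
    saddle-node bifurcation at r = 0."""
    if r > 0:
        return None
    return -((-r) ** 0.5)

def bifurcation_point(rs):
    """Scan an ensemble of parameter values (sorted ascending) and return the
    first r at which the stable state has disappeared."""
    for r in sorted(rs):
        if stable_equilibrium(r) is None:
            return r
    return None

ensemble = [i / 100.0 for i in range(-200, 201)]  # r from -2.0 to 2.0
r_crit = bifurcation_point(ensemble)
```

In the thesis' setting the ensemble members differ in many parameters at once and the equilibria come from the full box model, but the bookkeeping, "for which parameter combinations does the wet-monsoon state still exist?", is the same.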
Contents:
1. Introduction
2. Migration and Assimilation – Theoretical Approaches
   2.1 Meaning and Definition of the Terms Migration and Migrant
   2.2 Milton M. Gordon – Sub Processes of Assimilation
   2.3 Hartmut Esser – Acculturation, Integration, and Assimilation
   2.4 The Concept of Integration and Assimilation
   2.5 Straight-line Assimilation and its Implications
   2.6 Segmented Assimilation and its Implications
3. Social Inequality and Welfare – Theoretical Approaches
   3.1 Dimensions of Inequality
   3.2 Welfare Regimes and Social Inequality
   3.3 Migration, Assimilation and Inequality
4. Research Design
   4.1 Research Question and General Proceeding
   4.2 Sample and Data Base
   4.3 Operationalisation and Indicators
5. Migration, Welfare and Inequality in Three European Countries
6. Empirical Results
   6.1 Performance of Migrants Compared With Natives
   6.2 Different Trajectories of Assimilation
   6.3 Trajectories of Segmented Assimilation and their Determinants
   6.4 Policies, Attitudes and Assimilation – An Aggregate Analysis
   6.5 Summary – What Determines the Performance of Migrants?
7. Discussion of Empirical Results in Terms of Theoretical Approaches
   7.1 The Situation of Migrants in Three European Countries
   7.2 Assessment of the Trajectories of Assimilation
8. Conclusion – Future Prospects of Migration in Europe
Observers of international politics have been conscious of the growing international involvement of non-central governments (NCGs), particularly in federal systems. This has been supplemented by the internationalisation of subnational actors in quasi-federal and even unitary states. One of the difficulties is that analysis has often been locked into the dominant paradigm debate in International Relations concerning who is, and who is not, a significant actor. Having briefly explored the nature of this changing environment, marked by a growing emphasis on access rather than control as a policy objective and the emergence of what is termed a 'catalytic diplomacy', the discussion focuses on the need for linkage between the levels of government in the pursuit of international as well as domestic policy goals. The nature of these linkage mechanisms is discussed.
This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation and the Annual Cycle. More precisely, the work focused on two main problems: 1. How to separate the two oscillations within a tractable model for understanding the behaviour of the whole system. 2. How to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering that atmospheric effects were secondary to the ocean dynamics. The results found may be summarised as follows: 1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction prove better at defining a lower limit for the dimensionality of the system as well as at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling methods, provides a physically appealing way of decomposing the data, as it substitutes an approximation of the geodesic distances on the manifold for the Euclidean distances. We expect that this method could be successfully applied to other oscillatory extended systems and, in particular, to meteorological systems. 2. A three-dimensional dynamical system could be modelled, using a backfitting algorithm, to describe the dynamics of the sea surface temperature in the tropical Pacific Ocean.
We observed that, although few data points were available, we could predict the future behaviour of the coupled ENSO-Annual Cycle system for lead times of up to six months, although the constructed system presented several drawbacks: few data points to feed into the backfitting algorithm, an untrained model, a lack of forcing with external data, and the simplification of using a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skills of the three-dimensional time series were as good as those found in much more complex models. This suggests that the climatological system in the tropics is mainly explained by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. A global view of the work shows a general procedure for modelling climatological systems. First, a suitable method of either linear or nonlinear dimensionality reduction should be found. Then, low-dimensional time series can be extracted using that method. Finally, a low-dimensional model can be fitted with a backfitting algorithm in order to predict future states of the system.
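The distance substitution at the heart of Isomap can be illustrated with a small synthetic example (a sketch only, with toy data on a circle, not the SST field): a k-nearest-neighbour graph approximates geodesic distances on the manifold, which then replace the Euclidean distances of classical multidimensional scaling.

```python
import math

# Toy illustration of the Isomap substitution: Euclidean distances are replaced
# by geodesic distances approximated via shortest paths in a k-NN graph.

def euclidean(p, q):
    return math.dist(p, q)

# Points on a circle: a one-dimensional oscillatory manifold embedded in 2D.
n = 40
points = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]

# Step 1: k-nearest-neighbour graph.
k = 2
INF = float("inf")
dist = [[INF] * n for _ in range(n)]
for i in range(n):
    dist[i][i] = 0.0
    for j in sorted(range(n), key=lambda j: euclidean(points[i], points[j]))[1:k + 1]:
        d = euclidean(points[i], points[j])
        dist[i][j] = min(dist[i][j], d)
        dist[j][i] = min(dist[j][i], d)

# Step 2: geodesic distances = shortest paths in the graph (Floyd-Warshall).
for m in range(n):
    for i in range(n):
        for j in range(n):
            if dist[i][m] + dist[m][j] < dist[i][j]:
                dist[i][j] = dist[i][m] + dist[m][j]

# For antipodal points the Euclidean chord is 2, while the geodesic distance
# approximates half the circumference (pi), reflecting the manifold geometry.
chord = euclidean(points[0], points[n // 2])
geodesic = dist[0][n // 2]
```

Classical Isomap then applies metric multidimensional scaling to these geodesic distances; the sketch stops at the distance substitution itself.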
This paper presents a critical survey of the state of research on multiple wh-constructions in Slavic. Its aim is to show how unclear the data situation is, and how contradictory the theories built on such "unclear" data are. Contents: Historical background (Wachowicz 1974) Some earlier approaches The turning point: the influential work of Rudin (1988) Problems: - The problem of the reliability of the data - The problem of the relevance of the data "Hard" facts: - Strict superiority effects in Bulgarian - Obligatory wh-fronting in Slavic More recent approaches: - "Qualitative" approaches - "Quantitative" approaches - Alternative approaches
This is the first issue of a series in which affiliates of the Institute of Linguistics report the results of their experimental work. Generative linguists usually rely on the method of native speaker judgements in order to test their hypotheses. If a hypothesis rules out a set of sentences, linguists can ask native speakers whether they indeed feel that these sentences are ungrammatical in their language. There are, however, circumstances where this method is unreliable. In such cases a more elaborate method to test a hypothesis is called for. All papers in this series, and hence all papers in this volume, deal with issues that cannot be reliably tested with native speaker judgements. This volume contains 7 papers, all using different methods and addressing very different questions. This heterogeneity, by the way, reflects the various interests and research programs of the institute. The paper by Trutkowski, Zugck, Blaszczak, Fanselow, Fischer and Vogel deals with superiority in 10 Indo-European languages. The papers by Schlesewsky, Fanselow and Frisch and by Schlesewsky and Frisch deal with the role of case in processing German sentences. The paper by Vogel and Frisch deals with resolving case conflicts, as does the paper by Vogel and Zugck. The nature of partitive case is the topic of the paper by Fischer. The paper by Kügler deals with the realization of question intonation in two German dialects. We hope that you enjoy reading the papers!
When Galactic microlensing events of stars are observed, one usually measures a symmetric light curve corresponding to a single lens, or an asymmetric light curve, often with caustic crossings, in the case of a binary lens system. In principle, the fraction of binary stars in a certain separation range can be estimated from the number of measured microlensing events. However, a binary system may produce a light curve which can be fitted well as a single-lens light curve, particularly if the data sampling is poor and the error bars are large. We investigate what fraction of microlensing events produced by binary stars at different separations may be well fitted by, and hence misinterpreted as, single-lens events for various observational conditions. We find that this fraction strongly depends on the separation of the binary components, reaching its minimum between 0.6 and 1.0 Einstein radii, where it is still of the order of 5%. The Einstein radius corresponds to a few A.U. for typical Galactic microlensing scenarios. The rate of misinterpretation is higher for short microlensing events lasting up to a few months and for events with smaller maximum amplification. For fixed separation it increases for binaries with more extreme mass ratios. The problem of degeneracy between binary-lens and binary-source solutions of photometric microlensing light curves was studied on simulated data and on data observed by the PLANET collaboration. The fitting code BISCO, based on the PIKAIA genetic algorithm optimization routine, was written to optimize binary-source microlensing light curves observed at different sites in the I, R and V photometric bands. Tests on simulated microlensing light curves show that BISCO successfully finds the solution to a binary-source event in a very wide parameter space. A flux-ratio method is suggested in this work for breaking the degeneracy between binary-lens and binary-source photometric light curves.
Models show that only a few additional data points in the photometric V band, together with a full light curve in the I band, will enable breaking the degeneracy. Very good data quality and dense data sampling, combined with accurate binary-lens and binary-source modeling, yielded the discovery of the lowest-mass planet discovered outside of the Solar System so far, OGLE-2005-BLG-390Lb, with only 5.5 Earth masses. This was the first observed microlensing event in which the degeneracy between a planetary binary-lens and an extreme flux ratio binary-source model has been successfully broken. For the events OGLE-2003-BLG-222 and OGLE-2004-BLG-347, the degeneracy was encountered despite very dense data sampling. From light curve modeling and stellar evolution theory, there was a slight preference to explain OGLE-2003-BLG-222 as a binary-source event, and OGLE-2004-BLG-347 as a binary-lens event. However, without spectra, this degeneracy cannot be fully broken. No planet has so far been found around a white dwarf, though it is believed that Jovian planets should survive the late stages of stellar evolution, and that white dwarfs will retain planetary systems in wide orbits. We want to perform high-precision astrometric observations of nearby white dwarfs in wide binary systems with red dwarfs in order to find planets around white dwarfs. We selected a sample of observing targets (WD-RD binary systems, not published yet) which may have planets around the WD component, and modeled synthetic astrometric orbits which can be observed for these targets using existing and future astrometric facilities. Modeling was performed for astrometric accuracies of 0.01, 0.1, and 1.0 mas, separations between WD and planet of 3 and 5 A.U., a binary system separation of 30 A.U., planet masses of 10 Earth masses and 1 and 10 Jupiter masses, WD masses of 0.5 and 1.0 solar masses, and distances to the system of 10, 20 and 30 pc.
It was found that the PRIMA facility at the VLTI will be able to detect planets around white dwarfs once it is operating, by measuring the astrometric wobble of the WD due to a planet companion, down to 1 Jupiter mass. We show for the simulated observations that it is possible to model the orbits and find the parameters describing the potential planetary systems.
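The detectability claim above can be cross-checked with the standard astrometric-signature relation α[arcsec] = (M_p / M_WD) · a[AU] / d[pc]; the sketch below applies it to one assumed parameter combination from the modeled grid.

```python
# Back-of-envelope check of planet detectability via the WD's astrometric
# wobble, using alpha[arcsec] = (M_p / M_WD) * a[AU] / d[pc].
# The parameter combination below is one assumed point from the modeled grid.

M_JUP_IN_MSUN = 9.54e-4  # Jupiter mass in solar masses

def astrometric_signature_mas(m_planet_msun, m_wd_msun, a_au, d_pc):
    """Semi-amplitude of the host's astrometric wobble in milliarcseconds."""
    return 1e3 * (m_planet_msun / m_wd_msun) * a_au / d_pc

# 1 Jupiter-mass planet at 3 A.U. around a 0.5 solar-mass WD at 10 pc:
wobble = astrometric_signature_mas(M_JUP_IN_MSUN, 0.5, 3.0, 10.0)  # ~0.57 mas
```

The resulting wobble of roughly 0.6 mas lies comfortably above the 0.1 mas accuracy level of the grid, consistent with the conclusion that PRIMA-class astrometry can reach Jupiter-mass companions.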
This diploma thesis deals with the process of political and administrative decentralisation in the Kingdom of Lesotho. Although decentralisation in itself does not automatically lead to development, it has become an integral part of reform processes in many developing countries. Governments and international donors consider efficient decentralised political and administrative structures as essential elements of “good governance” and a prerequisite for structural poverty alleviation. This paper seeks to analyse how the given decentralisation strategy and its implementation affect different features of good governance in the case of Lesotho. The results of the analysis confirm that the decentralisation process significantly improved political participation of the local population. However, the second objective of enhancing efficiency through decentralisation was not achieved. On the contrary, efficiency considerations played no role in the institutional design of the newly created local authorities or in the civil service recruitment policy. Additionally, the mechanisms created for political participation generate significant costs. It is therefore impossible to judge unambiguously the contribution of decentralisation to the achievement of good governance: different subtargets of good governance are affected in opposite directions. Consequently, the adequacy of good governance as a guiding concept for decentralisation policies can be questioned. The assessment of the success of decentralisation policies requires a normative framework that takes into account the relations between participation and efficiency. Despite the partly reduced administrative efficiency, the author’s overall impression of the decentralisation process in Lesotho is positive. The establishment of democratically legitimised and participatory local governments justifies certain additional expenditure.
However, mistakes in the design and the implementation of the decentralisation strategy could have been avoided.
Since their discovery in 1610 by Galileo Galilei, Saturn's rings continue to fascinate both experts and amateurs. Countless numbers of icy grains in almost Keplerian orbits reveal a wealth of structures such as ringlets, voids and gaps, wakes and waves, and many more. Grains are found to increase in size with increasing radial distance to Saturn. Recently discovered "propeller" structures in the Cassini spacecraft data provide evidence for the existence of embedded moonlets. In the wake of these findings, the discussion resumes about the origin and evolution of planetary rings, and about growth processes in tidal environments. In this thesis, a contact model for binary adhesive, viscoelastic collisions is developed that accounts for agglomeration as well as restitution. Collisional outcomes are crucially determined by the impact speed and the masses of the collision partners; the model yields a maximal impact velocity at which agglomeration still occurs. Based on the latter, a self-consistent kinetic concept is proposed. The model considers all possible collisional outcomes, namely coagulation, restitution, and fragmentation. Emphasizing the evolution of the mass spectrum and concentrating on coagulation alone, a coagulation equation including a restricted sticking probability is derived. The otherwise phenomenological Smoluchowski equation is reproduced from basic principles and denotes a limit case of the derived coagulation equation. The relevance of adhesion for force-free granular gases, and for those under the influence of Keplerian shear, is analysed qualitatively and quantitatively. Capture probability, agglomerate stability, and the mass spectrum evolution are investigated in the context of adhesive interactions. A size-dependent radial limit distance from the central planet is obtained, refining the Roche criterion. Furthermore, capture probability in the presence of adhesion is generally different compared to the case of pure gravitational capture.
In contrast to a Smoluchowski-type evolution of the mass spectrum, numerical simulations of the obtained coagulation equation revealed that a transition from smaller grains to larger bodies cannot occur via a collisional cascade alone. For the parameters used in this study, effective growth ceases at an average size of centimeters.
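A Smoluchowski-type evolution with a restricted sticking probability can be sketched numerically as follows; this is a minimal illustration with a constant kernel and arbitrary units, not the contact model derived in the thesis.

```python
# Minimal numerical sketch of a Smoluchowski-type coagulation equation with a
# restricted sticking probability P_STICK (constant kernel K; all values are
# illustrative, in arbitrary units):
#   dn_k/dt = (1/2) * sum_{i+j=k} P_STICK*K*n_i*n_j - n_k * sum_j P_STICK*K*n_j

K = 1.0        # constant collision kernel
P_STICK = 0.5  # fraction of collisions slow enough for agglomeration
KMAX = 50      # largest mass (in monomer units) tracked
DT = 0.001     # explicit Euler time step
STEPS = 2000

n = [0.0] * (KMAX + 1)
n[1] = 1.0  # start from monomers only

for _ in range(STEPS):
    gain = [0.0] * (KMAX + 1)
    loss = [0.0] * (KMAX + 1)
    # Sum over ordered pairs (i, j) whose merger still fits in the tracked bins;
    # the factor 1/2 in the gain corrects for double counting of unordered pairs.
    for i in range(1, KMAX + 1):
        for j in range(1, KMAX + 1 - i):
            rate = P_STICK * K * n[i] * n[j]
            loss[i] += rate
            gain[i + j] += 0.5 * rate
    for k in range(1, KMAX + 1):
        n[k] += DT * (gain[k] - loss[k])

# Total mass is conserved while monomers are consumed and aggregates grow.
total_mass = sum(k * n[k] for k in range(1, KMAX + 1))
```

Reducing P_STICK (a stricter velocity threshold for agglomeration) slows the depletion of small grains, which is the qualitative effect the restricted sticking probability introduces.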
This volume offers new arguments and perspectives in the ongoing debate about the optimal analysis of verb movement, mainly, but not exclusively, in German. Fanselow and Meinunger deal with verb second (V2) movement in German main clauses. Fanselow argues that head movement of the substitution type follows the standard minimalist conceptions of Merge and Move and is therefore not subject to the same objections as head movement of the head-adjunction type, which violates Chomsky's minimalist extension condition, operates countercyclically, and fails to let the moved head c-command its trace. Fanselow argues for V2 movement as head movement of the substitution type. Meinunger discusses a restriction on V2 movement imposed by phrases like "mehr als" ('more than'), as in "Der Wert hat sich weit mehr als verdreifacht" ('the value has far more than tripled'), where V2 movement is ruled out (cf. *"Der Wert verdreifachte sich mehr als"). Meinunger claims that this restriction is best analysed in phonological terms: the preposition/complementiser "als" acts as a prefixal clitic to its host, the finite verb, which therefore may not move without it. With respect to the V2 debate, Meinunger argues for an interface perspective. He shows that V2 is restricted from both the conceptual and the phonological interface. Vogel, finally, discusses the syntax of clause-final verbal complexes and their dialectal variation in German. He compares three different syntactic analyses: a minimalist head movement analysis, a minimalist XP movement analysis, and an Optimality-theoretic PF movement analysis. The three accounts are evaluated relative to the additional assumptions they have to make, the complications they face, and how well they fit the observations. Vogel argues in favour of the phonologically oriented OT analysis because of its ability to create a direct link between a particular word order pattern and its essentially phonological trigger.
Each of the three papers recognises the relevance of surface forms in the analysis of German verb movement. They differ, however, in the extent to which phonological aspects take part in the explanations they offer.
One type of internal diachronic change that has been extensively studied for spoken languages is grammaticalization, whereby lexical elements develop into free or bound grammatical elements. Based on a wealth of spoken languages, a large number of prototypical grammaticalization pathways have been identified. Moreover, it has been shown that desemanticization, decategorialization, and phonetic erosion are typical characteristics of grammaticalization processes. Not surprisingly, grammaticalization is also responsible for diachronic change in sign languages. Drawing on data from a fair number of sign languages, we show that grammaticalization in visual-gestural languages – as far as the development from lexical to grammatical element is concerned – follows the same developmental pathways as in spoken languages. That is, the proposed pathways are modality-independent. Besides these intriguing parallels, however, sign languages have the possibility of developing grammatical markers from manual and non-manual co-speech gestures. We discuss various instances of grammaticalized gestures and also briefly address the issue of the modality-specificity of this phenomenon.
The subject of this work is the possibility of synchronizing nonlinear systems via correlated noise and automatic control. The thesis is divided into two parts. The first part is motivated by field studies on feral sheep populations on two islands of the St. Kilda archipelago, which revealed strong correlations due to environmental noise. For a linear system the population correlation equals the noise correlation (Moran effect). However, there exists no systematic examination of the properties of nonlinear maps under the influence of correlated noise. Therefore, in the first part of this thesis the noise-induced correlation of logistic maps is systematically examined. For small noise intensities it can be shown analytically that the correlation of quadratic maps in the fixed-point regime is always smaller than or equal to the noise correlation. In the period-2 regime a Markov model qualitatively explains the main dynamical characteristics. Furthermore, two different mechanisms are introduced which lead to a higher correlation of the systems than the environmental correlation. The new effect of "correlation resonance" is described, i.e. the correlation exhibits a maximum depending on the noise intensity. In the second part of the thesis an automatic control method is presented which synchronizes different systems in a robust way. This method is inspired by phase-locked loops and is based on a feedback loop with a differential control scheme, which makes it possible to change the phases of the controlled systems. The effectiveness of the approach is demonstrated for controlled phase synchronization of regular oscillators and food-web models.
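The fixed-point-regime result can be reproduced in a toy simulation (illustrative parameters; the thesis treats this analytically): two logistic maps in the fixed-point regime are driven by noise with a prescribed correlation, and the measured correlation of the two systems stays at or below the noise correlation.

```python
import random

# Toy version of the Moran-effect question for quadratic maps: two logistic
# maps in the fixed-point regime driven by noise with prescribed correlation
# RHO. All parameter values below are illustrative assumptions.
random.seed(42)

A = 2.5       # logistic parameter (stable fixed point at x* = 0.6)
RHO = 0.8     # correlation of the environmental noise
SIGMA = 0.05  # noise intensity
STEPS = 50000

def clamp(x):
    return min(max(x, 0.0), 1.0)

x, y = 0.3, 0.7
xs, ys = [], []
for _ in range(STEPS):
    common = random.gauss(0.0, 1.0)  # shared environmental component
    nx = RHO ** 0.5 * common + (1.0 - RHO) ** 0.5 * random.gauss(0.0, 1.0)
    ny = RHO ** 0.5 * common + (1.0 - RHO) ** 0.5 * random.gauss(0.0, 1.0)
    x = clamp(A * x * (1.0 - x) + SIGMA * nx)
    y = clamp(A * y * (1.0 - y) + SIGMA * ny)
    xs.append(x)
    ys.append(y)

def pearson(u, v):
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sum((a - mu) ** 2 for a in u) ** 0.5
    sv = sum((b - mv) ** 2 for b in v) ** 0.5
    return cov / (su * sv)

corr_systems = pearson(xs, ys)
```

With these settings the measured system correlation comes out at or slightly below the noise correlation of 0.8, in line with the analytical result for the fixed-point regime.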
Contents: 1. Capitalist societies as market-bargaining societies on the basis of resources of action: The ideal-typical bargain between capital and labour; an alternative to Marx's theory of exploitation - Discussion of the model 2. A general typology of paths of societies in history and a characterisation of state socialism - People's capitalisms as a perspective of development - What remains of Marx's ideas? 3. Variations of welfare capitalism after the decline of state socialism 3.1 National differences of welfare capitalism 3.2 Overall inequality of income and overall class consciousness 3.3 Explaining income inequality and variation in class consciousness by class and gender 3.3.1 A test of different class models in the FRG 3.3.2 Developing an international model of gendered occupational and employment status as bundles of resources of action 4. Summary
brandial06 was the tenth in a series of workshops that aims to bring together researchers working on the semantics and pragmatics of dialogues in fields such as artificial intelligence, formal semantics and pragmatics, computational linguistics, philosophy, and psychology. This volume collects all presented papers and posters and gives abstracts of the invited talks.
On the basis of the Dynamic Syntax framework, this paper argues that the production pressures in dialogue determining alignment effects and given versus new informational effects also drive the shift from case-rich free word order systems without clitic pronouns into systems with clitic pronouns with rigid relative ordering. The paper introduces assumptions of Dynamic Syntax, in particular the building up of interpretation through structural underspecification and update, sketches the attendant account of production with close coordination of parsing and production strategies, and shows how what was at the Latin stage a purely pragmatic, production-driven decision about linear ordering becomes encoded in the clitics in the Medieval Spanish system which then through successive steps of routinization yield the modern systems with immediately pre-verbal fixed clitic templates.
We analyze anaphoric phenomena in the context of building an input understanding component for a conversational system for tutoring mathematics. In this paper, we report the results of an analysis of two sets of corpora of dialogs on mathematical theorem proving. We exemplify anaphoric phenomena, identify factors relevant to anaphora resolution in our domain, and propose extensions to the input interpretation component to support it.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
Goal-oriented dialog as a collaborative subordinated activity involving collective acceptance
(2006)
Modeling dialog as a collaborative activity consists notably in specifying the content of the Conversational Common Ground and the kind of social mental state involved. In previous work (Saget, 2006), we claimed that Collective Acceptance is the proper social attitude for modeling Conversational Common Ground in the particular case of goal-oriented dialog. We provide a formalization of Collective Acceptance, together with elements for integrating this attitude into a rational model of dialog; finally, a model of referential acts as part of a collaborative activity is presented. The particular case of reference has been chosen in order to exemplify our claims.
A key problem for models of dialogue is to explain the mechanisms involved in generating and responding to clarification requests. We report a 'Maze task' experiment that investigates the effect of 'spoof' clarification requests on the development of semantic co-ordination. The results provide evidence of both local and global semantic co-ordination phenomena that are not captured by existing dialogue co-ordination models.
How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? In order to gain some insight into these questions, we present an ALife model in which the lexicon dynamics of populations that possess and lack metacommunicative interaction (MCI) capabilities are compared. We ran a series of experiments on multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal some clear differences in the lexicon dynamics of populations that acquire words solely by introspection contrasted with populations that learn using MCI or using a mixed strategy of introspection and MCI. The lexicon diverges at a faster rate for an introspective population, eventually collapsing to a single form associated with all meanings. This contrasts sharply with MCI-capable populations, in which a lexicon is maintained where every meaning is associated with a unique word. We also investigated the effect of increasing the meaning space and showed that it speeds up lexicon divergence for all populations irrespective of their acquisition method.
Demonstratives, in particular gestures that "only" accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, fixing their function is one big problem; the other is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR applications increases the need for theories of multimodal structures and events. In our workshop contribution we focus on the integration of multimodal contents and investigate different approaches dealing with this problem, such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
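The core of a cross-recurrence analysis can be sketched with synthetic series (not the Swedish eye-movement data; the radius and signals below are assumptions): two series "recur" at time points where their values fall within a radius of each other, and coupled series recur along the diagonal far more often than independent ones.

```python
import math
import random

# Sketch of the cross-recurrence idea behind CRQA, using synthetic scalar
# series. EPS (the recurrence radius) and the signals are assumed values.
random.seed(1)

EPS = 0.2

def diagonal_recurrence_rate(x, y, eps):
    """Fraction of time points at which the two series recur simultaneously."""
    return sum(1 for xi, yi in zip(x, y) if abs(xi - yi) < eps) / len(x)

t = [i * 0.1 for i in range(200)]
base = [math.sin(v) for v in t]
coupled = [v + random.gauss(0.0, 0.1) for v in base]  # tracks the base signal
uncoupled = [random.uniform(-1.0, 1.0) for _ in t]    # independent of it

rr_coupled = diagonal_recurrence_rate(base, coupled, EPS)
rr_uncoupled = diagonal_recurrence_rate(base, uncoupled, EPS)
```

Full CRQA additionally quantifies diagonal-line structures away from the main diagonal (lagged coupling between the two series); this sketch only contrasts the simultaneous recurrence rates.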
Verbal or visual? : How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, as in direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and depending on what factors. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, while feedback signals by the addressee were found to have no influence.
We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we focus on the concept of the pointing cone, a geometrical model of a pointing gesture's extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with raters' classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they suggest separating pointing from reference.
Classical SDRT (Asher and Lascarides, 2003) discussed essential features of dialogue like adjacency pairs or corrections and up-dating. Recent work in SDRT (Asher, 2002, 2005) aims at the description of natural dialogue. We use this work to model situated communication, i.e. dialogue, in which sub-sentential utterances and gestures (pointing and grasping) are used as conventional modes of communication. We show that in addition to cognitive modelling in SDRT, capturing mental states and speech-act related goals, special postulates are needed to extract meaning out of contexts. Gestural meaning anchors Discourse Referents in contextually given domains. Both sorts of meaning are fused with the meaning of fragments to get at fully developed dialogue moves. This task accomplished, the standard SDRT machinery, tagged SDRSs, rhetorical relations, the up-date mechanism, and the Maximize Discourse Coherence constraint generate coherent structures. In sum, meanings from different verbal and non-verbal sources are assembled using extended SDRT to form coherent wholes.
We present a formal analysis of iconic coverbal gesture. Our model describes the incomplete meaning of gesture that’s derivable from its form, and the pragmatic reasoning that yields a more specific interpretation. Our formalism builds on established models of discourse interpretation to capture key insights from the descriptive literature on gesture: synchronous speech and gesture express a single thought, but while the form of iconic gesture is an important clue to its interpretation, the content of gesture can be resolved only by linking it to its context.
In two experiments, many annotators marked antecedents for discourse deixis as unconstrained regions of text. The experiments show that annotators do converge on the identity of these text regions, though much of what they do can be captured by a simple model. Demonstrative pronouns are more likely than definite descriptions to be marked with discourse antecedents. We suggest that our methodology is suitable for the systematic study of discourse deixis.
We present a new analysis of illocutionary forces in dialogue. We analyze them as complex conversational moves involving two dimensions: what Speaker commits herself to and what she calls on Addressee to perform. We start from the analysis of speech acts such as confirmation requests or whimperatives, and extend the analysis to seemingly simple speech acts, such as statements and queries. Then, we show how to integrate our proposal in the framework of the Grammar for Conversation (Ginzburg, to app.), which is adequate for modelling agents' information states and how they get updated.
Claiming that cross-speaker "but" can signal correction in dialogue, we start by describing the types of corrections "but" can communicate by focusing on the Speech Act (SA) communicated in the previous turn and address the ways in which "but" can correct what is communicated. We address whether "but" corrects the proposition, the direct SA or the discourse relation communicated in the previous turn. We will also briefly address other relations signalled by cross-turn "but". After presenting a typology of the situations "but" can correct, we will address how these corrections can be modelled in the Information State model of dialogue, motivating this work by showing how it can be used to potentially avoid misunderstandings. We wrap up by showing how the model presented here updates beliefs in the Information State representation of the dialogue and can be used to facilitate response deliberation.
An account is presented of the focus properties, common ground effect and dialogue behaviour of the accented German discourse marker "doch" and the accented sentence negation "nicht". It is argued that "doch" and "nicht" evoke as a focus alternative the logical complement of the proposition expressed by the sentence in which they occur, and that an analysis in terms of contrastive focus accounts for their effect on the common ground and their function in dialogue.
The performance of a home-built tunable diode laser (TDL) spectrometer, aimed at multi-line detection of carbon dioxide, has been evaluated and optimized. In the regime of the (30<SUP>0</SUP>1)<SUB>III</SUB> ← (000) band of <SUP>12</SUP>CO<SUB>2</SUB> around 1.6 μm, the dominating isotope species <SUP>12</SUP>CO<SUB>2</SUB>, <SUP>13</SUP>CO<SUB>2</SUB>, and <SUP>12</SUP>C<SUP>18</SUP>O<SUP>16</SUP>O were detected simultaneously without interference by water vapor. Detection limits in the range of a few ppmv were obtained for each species utilizing wavelength modulation (WM) spectroscopy with balanced detection in a long-path absorption cell set-up. High sensitivity in conjunction with high precision —typically ±1‰ and ±6‰ for 3% and 0.7% of CO<SUB>2</SUB>, respectively— renders this experimental approach a promising analytical concept for isotope-ratio determination of carbon dioxide in soil and breath gas. For a moderate <SUP>12</SUP>CO<SUB>2</SUB> line, the pressure dependence of the line profile was characterized in detail to account for pressure effects on sensitive measurements.
Improvement of a fluorescence immunoassay with a compact diode-pumped solid state laser at 315 nm
(2006)
We demonstrate the improvement of fluorescence immunoassay (FIA) diagnostics achieved by deploying a newly developed compact diode-pumped solid-state (DPSS) laser emitting at 315 nm. The laser is based on the quasi-three-level transition in Nd:YAG at 946 nm. Pulsed operation is realized either by active Q-switching with an electro-optical device or by introducing a Cr⁴⁺:YAG saturable absorber as a passive Q-switch element. Extra-cavity second-harmonic generation in different nonlinear crystal media yielded blue light at 473 nm. Subsequent mixing of the fundamental and the second harmonic in a β-barium-borate crystal provided pulsed emission at 315 nm with up to 20 μJ maximum pulse energy and 17 ns pulse duration. Replacing the nitrogen laser of an FIA diagnostics system with the DPSS laser led to a considerable improvement of the detection limit: despite significantly lower pulse energies (7 μJ for the DPSS laser versus 150 μJ for the nitrogen laser), preliminary investigations showed the limit of detection reduced by a factor of three for a typical FIA.
The performance of a home-built tunable diode laser (TDL) spectrometer has been optimized for multi-line detection of carbon dioxide in natural gases. In the region of the (30⁰1)_III ← (000) band of ¹²CO₂ around 1.6 μm, the dominant isotope species ¹²CO₂, ¹³CO₂, and ¹²C¹⁸O¹⁶O were detected simultaneously. In contrast to most established techniques, selective measurements are performed without any sample preparation; this is possible because the CO₂ detection is free of interference from water, which is ubiquitous in natural gases. Detection limits in the range of a few ppmv were obtained for each species using wavelength modulation (WM) spectroscopy with balanced detection in a long-path absorption cell set-up. Linear calibration plots cover a dynamic range of four orders of magnitude, allowing quantitative CO₂ detection in various samples such as soil and breath gas. The excellent selectivity, sensitivity, and stability of the chosen analytical concept enable high isotopic resolution. The resolution obtained, typically ±1.0 ‰ and ±1.5 ‰ (for 3 vol.% and 0.7 vol.% of CO₂, respectively), makes this a promising analytical tool for isotope-ratio determination of carbon dioxide in soil gas. Preliminary experiments on soil respiration combine, for the first time, on-line quantification of the overall carbon dioxide content with an optode sensor and isotopic determination of the natural gas species with the TDL system.
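The per-mil (‰) figures quoted above refer to the conventional delta notation of isotope-ratio analysis, in which a measured ¹³C/¹²C abundance ratio is expressed relative to the VPDB reference standard. A minimal sketch of that conversion (the standard ratio below is the commonly tabulated nominal value; the sample figures are illustrative):

```python
# Nominal 13C/12C abundance ratio of the VPDB reference standard
# (commonly tabulated value, quoted here for illustration)
R_VPDB = 0.0111802

def delta13C(r_sample, r_standard=R_VPDB):
    """Convert a measured 13C/12C abundance ratio to delta-13C in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A ±1 ‰ isotopic resolution thus corresponds to resolving relative changes of the ¹³C/¹²C ratio of about 0.1 %, which is the demand the spectrometer's precision must meet.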
Near-infrared (NIR) absorption spectroscopy with tunable diode lasers allows the simultaneous detection of the three most important isotopologues of carbon dioxide (¹²CO₂, ¹³CO₂, ¹²C¹⁸O¹⁶O) and carbon monoxide (¹²CO, ¹³CO, ¹²C¹⁸O). The flexible and compact fiber-optic tunable diode laser absorption spectrometer (TDLAS) allows selective measurements of CO₂ and CO with high isotopic resolution without sample preparation, since there is no interference with water vapour. For each species, linear calibration plots with a dynamic range of four orders of magnitude and detection limits (LOD) in the range of a few ppm were obtained utilizing wavelength modulation spectroscopy (WMS) with balanced detection in a Herriott-type multipass cell. The high performance of the apparatus is illustrated by fill-evacuation-refill cycles.
Two examples of our biophotonic research utilizing nanoparticles are presented: laser-based fluoroimmunoassays and in-vivo optical oxygen monitoring. Results of this work include significantly enhanced sensitivity of a homogeneous fluorescence immunoassay and markedly improved spatial resolution of oxygen gradients in root nodules of a legume species.