Since their discovery in 1610 by Galileo Galilei, Saturn's rings have continued to fascinate experts and amateurs alike. Countless icy grains in almost Keplerian orbits reveal a wealth of structures such as ringlets, voids and gaps, wakes and waves, and many more. The grains are found to increase in size with increasing radial distance from Saturn. Recently discovered "propeller" structures in the Cassini spacecraft data provide evidence for the existence of embedded moonlets. In the wake of these findings, the discussion about the origin and evolution of planetary rings, and about growth processes in tidal environments, has resumed. In this thesis, a contact model for binary adhesive, viscoelastic collisions is developed that accounts for agglomeration as well as restitution. The collisional outcome is crucially determined by the impact speed and the masses of the collision partners, and the model yields a maximal impact velocity at which agglomeration still occurs. Based on the latter, a self-consistent kinetic concept is proposed that considers all possible collisional outcomes: coagulation, restitution, and fragmentation. Emphasizing the evolution of the mass spectrum, and concentrating on coagulation alone, a coagulation equation including a restricted sticking probability is derived. The otherwise phenomenological Smoluchowski equation is thereby reproduced from basic principles and emerges as a limit case of the derived coagulation equation. The relevance of adhesion to force-free granular gases, and to those under the influence of Keplerian shear, is analyzed qualitatively and quantitatively. Capture probability, agglomerate stability, and the evolution of the mass spectrum are investigated in the context of adhesive interactions. A size-dependent radial limit distance from the central planet is obtained, refining the Roche criterion. Furthermore, the capture probability in the presence of adhesion generally differs from that of purely gravitational capture.
In contrast to a Smoluchowski-type evolution of the mass spectrum, numerical simulations of the obtained coagulation equation reveal that a transition from smaller grains to larger bodies cannot occur via a collisional cascade alone. For the parameters used in this study, effective growth ceases at an average size of centimeters.
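The limit case mentioned above, the classical discrete Smoluchowski coagulation equation, can be sketched numerically. The explicit Euler scheme, the constant kernel, and the mass cutoff below are illustrative assumptions only, not the thesis' actual model (which additionally restricts sticking by a velocity-dependent probability):

```python
import numpy as np

def smoluchowski_step(n, kernel, dt):
    """One explicit Euler step of the discrete Smoluchowski equation:
    dn_k/dt = 1/2 * sum_{i+j=k} K(i,j) n_i n_j - n_k * sum_j K(k,j) n_j.
    n[m] is the number density of mass-m agglomerates (n[0] is unused)."""
    N = len(n) - 1
    dn = np.zeros_like(n)
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            rate = kernel(i, j) * n[i] * n[j]
            dn[i] -= rate                 # mass-i agglomerates are consumed
            if i + j <= N:                # merger product i+j (cutoff above N)
                dn[i + j] += 0.5 * rate   # ordered pairs counted twice -> 1/2
    return n + dt * dn

# Monodisperse start, constant kernel: small grains deplete while larger
# agglomerates appear; total mass is conserved (up to the cutoff).
n = np.zeros(65)
n[1] = 1.0
for _ in range(10):
    n = smoluchowski_step(n, lambda i, j: 1.0, 0.01)
total_mass = sum(m * n[m] for m in range(1, 65))
```

With a sticking probability that vanishes above a maximal impact velocity, the kernel would be damped for energetic pairs, which is the mechanism that lets growth stall at finite sizes.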
In Allefeld & Kurths [2004], we introduced an approach to multivariate phase synchronization analysis in the form of a Synchronization Cluster Analysis (SCA). A statistical model of a synchronization cluster was described, and an abbreviated instruction on how to apply this model to empirical data was given, while an implementation of the corresponding algorithm was (and is) available from the authors. In this letter, the complete details on how the data analysis algorithm is to be derived from the model are filled in.
Biochemical and physiological studies of Arabidopsis thaliana Diacylglycerol Kinase 7 (AtDGK7)
(2006)
A family of diacylglycerol kinases (DGKs) phosphorylates the substrate diacylglycerol (DAG) to generate phosphatidic acid (PA). Both molecules, DAG and PA, are involved in signal transduction pathways. In the model plant Arabidopsis thaliana, seven candidate genes (named AtDGK1 to AtDGK7) code for putative DGK isoforms. Here I report the molecular cloning and characterization of AtDGK7; biochemical, molecular, and physiological properties of the gene and its corresponding enzyme are analyzed. According to Genevestigator, the AtDGK7 gene is expressed in seedlings and adult Arabidopsis plants, especially in flowers. The AtDGK7 gene encodes the smallest functional DGK predicted in higher plants, but it also has an alternative coding sequence containing an extended AtDGK7 open reading frame, confirmed by PCR and submitted to the GenBank database (under the accession number DQ350135). The new cDNA has an extension of 439 nucleotides coding for 118 additional amino acids. The former AtDGK7 enzyme has a predicted molecular mass of ~41 kDa, and its activity is affected by pH and detergents. The DGK inhibitor R59022 also affects AtDGK7 activity, although at higher concentrations (IC50 ~380 µM). The AtDGK7 enzyme shows a Michaelis-Menten-type saturation curve for 1,2-DOG; the calculated Km and Vmax were 36 µM 1,2-DOG and 0.18 pmol PA min-1 (mg protein)-1, respectively, under the assay conditions. The former AtDGK7 protein is able to phosphorylate different DAG analogs that are typically found in plants. The newly deduced AtDGK7 protein harbors the catalytic domain DGKc and the accessory domain DGKa, instead of the truncated domain found in the former AtDGK7 protein (Gomez-Merino et al., 2005).
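The reported kinetic constants plug directly into the Michaelis-Menten rate law; the sketch below merely restates that standard textbook relation with the values quoted above (Km = 36 µM, Vmax = 0.18 pmol PA min⁻¹ (mg protein)⁻¹), not any analysis specific to this study:

```python
def michaelis_menten_rate(s, km=36.0, vmax=0.18):
    """Michaelis-Menten rate v = Vmax * [S] / (Km + [S]).
    s and km in µM 1,2-DOG; vmax in pmol PA min^-1 (mg protein)^-1
    (values quoted from the text)."""
    return vmax * s / (km + s)

# At [S] = Km the enzyme runs at exactly half of Vmax.
half_max = michaelis_menten_rate(36.0)
```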
In two experiments, many annotators marked antecedents for discourse deixis as unconstrained regions of text. The experiments show that annotators do converge on the identity of these text regions, though much of what they do can be captured by a simple model. Demonstrative pronouns are more likely than definite descriptions to be marked with discourse antecedents. We suggest that our methodology is suitable for the systematic study of discourse deixis.
Nonaqueous synthesis of metal oxide nanoparticles and their assembly into mesoporous materials
(2006)
This thesis mainly consists of two parts: the synthesis of several kinds of technologically interesting crystalline metal oxide nanoparticles via a nonaqueous sol-gel process, and the formation of mesoporous metal oxides using some of these nanoparticles as building blocks via the evaporation-induced self-assembly (EISA) technique. In the first part, the experimental procedures and characterization results of successful syntheses of crystalline tin oxide and tin-doped indium oxide (ITO) nanoparticles are reported. The SnO2 nanoparticles exhibit a monodisperse particle size (3.5 nm on average), high crystallinity, and particularly high dispersibility in THF, which make them an ideal particulate precursor for the formation of mesoporous SnO2. The ITO nanoparticles possess uniform particle morphology, a narrow particle size distribution (5-10 nm), high crystallinity, as well as high electrical conductivity. The synthesis approaches and characterization of various mesoporous metal oxides, including TiO2, SnO2, a mixture of CeO2 and TiO2, and a mixture of BaTiO3 and SnO2, are reported in the second part of this thesis. Mesoporous TiO2 and SnO2 are presented as highlights of this part: mesoporous TiO2 was produced both as films and as bulk material, while in the case of mesoporous SnO2 the study focused on the high order of the porous structure. All these mesoporous metal oxides show high crystallinity, high surface area, and rather monodisperse pore sizes, which demonstrates the validity of the EISA process and the use of preformed crystalline nanoparticles as nanobuilding blocks (NBBs) to produce mesoporous metal oxides.
Quantum dots (QDs) are commonly used as luminescent markers for imaging in biological applications because their optical properties are largely inert against the surrounding solvent. This, together with broad and strong absorption bands and intense, sharp, tunable luminescence bands, makes them interesting candidates for methods utilizing Förster resonance energy transfer (FRET), e.g. for sensitive homogeneous fluoroimmunoassays (FIAs). In this work we demonstrate energy transfer from Eu3+-trisbipyridine (Eu-TBP) donors to CdSe-ZnS QD acceptors in solutions with and without serum. The QDs are commercially available CdSe-ZnS core-shell particles emitting at 655 nm (QD655). The FRET system was formed by the binding of the streptavidin-conjugated donors to the biotin-conjugated acceptors. After excitation of Eu-TBP, and as a result of the energy transfer, the luminescence of the QD655 acceptors showed lengthened decay times like those of the donors. The energy transfer efficiency, calculated from the decay times of the bound and the unbound components, amounted to 37%. The Förster radius, estimated from the absorption and emission bands, was ca. 77 Å. The effective binding ratio, which depends not only on the ratio of binding pairs but also on unspecific binding, was obtained from the concentration dependence of the donor emission. As serum promotes unspecific binding, the overall FRET efficiency of the assay was reduced. We conclude that QDs are good substitutes for acceptors in FRET if combined with slow-decay donors like europium. The investigation of the influence of the serum provides guidance towards improving the binding properties of QD assays.
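The efficiency and distance figures above follow standard FRET relations. The sketch below shows those textbook formulas (E = 1 − τ_DA/τ_D and E = R0⁶/(R0⁶ + r⁶)); it is an illustration under that assumption, not the authors' exact evaluation procedure:

```python
def fret_efficiency(tau_bound, tau_unbound):
    """FRET efficiency from donor decay times: E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_bound / tau_unbound

def donor_acceptor_distance(efficiency, r0):
    """Invert E = R0^6 / (R0^6 + r^6) for the donor-acceptor distance r."""
    return r0 * ((1.0 - efficiency) / efficiency) ** (1.0 / 6.0)

# A 37% efficiency (as reported) with R0 = 77 Angstrom implies a
# donor-acceptor distance slightly above the Foerster radius.
r = donor_acceptor_distance(0.37, 77.0)
```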
Verbal or visual? : How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, such as direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech, while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and on which factors this distribution depends. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), the communicative goals, and the information status were found to be influential, whereas feedback signals from the addressee had no influence.
We present a new analysis of illocutionary forces in dialogue. We analyze them as complex conversational moves involving two dimensions: what Speaker commits herself to and what she calls on Addressee to perform. We start from the analysis of speech acts such as confirmation requests or whimperatives, and extend the analysis to seemingly simple speech acts, such as statements and queries. Then, we show how to integrate our proposal in the framework of the Grammar for Conversation (Ginzburg, to app.), which is adequate for modelling agents' information states and how they get updated.
In this work, approaches to the development of new detection systems for the analytical ultracentrifuge (AUC) were explored. Unlike in chromatographic fractionation techniques, multidetection systems for the AUC have not yet been implemented to their full extent, despite their potential benefit. In this study we tried to couple to the AUC existing fundamental spectroscopic and scattering techniques that are used in day-to-day science as tools for extracting analyte information. Trials were performed to adapt Raman, light-scattering, and UV/Vis detection (with the possibility to work with the whole range of wavelengths). Raman and light scattering were concluded to be possible detection systems for the AUC, while the development of a fast fiber-optics-based multiwavelength detector was completed. The multiwavelength detector generated data matching literature and reference measurements, and collected them faster than the commercial instrument. With the data generated in 3-D space by the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information over the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified with the evaluation of data for a mixture of three colloids; these data were in conformity with measurement results from normal radial experiments and showed no significant diffusion broadening. Thus, with our multiwavelength detector, meaningful data in 3-D space can be collected at a much higher speed of data generation.
Forum: EU-Diplomatie im Jahre 2020
Sucrose synthase (Susy) is a key enzyme of sucrose metabolism, catalysing the reversible conversion of sucrose and UDP to UDP-glucose and fructose. Therefore, its activity, localization, and function have been studied in various plant species. It has been shown that Susy can play a role in supplying energy in companion cells for phloem loading (Fu and Park, 1995), provides substrates for starch synthesis (Zrenner et al., 1995), and supplies UDP-glucose for cell wall synthesis (Haigler et al., 2001). Analysis of the Arabidopsis genome identifies six Susy isoforms. The expression of these isoforms was investigated using promoter-reporter gene constructs (GUS) and real-time RT-PCR. Although the isoforms are closely related at the protein level, they have radically different spatial and temporal patterns of expression in the plant, with no two isoforms showing the same distribution. More than one isoform is expressed in all organs examined. Some of them show high but specific expression in particular organs or developmental stages, whilst others are constantly expressed throughout the whole plant and across various stages of development. The in planta functions of the six Susy isoforms were explored through analysis of T-DNA insertion mutants and RNAi lines. Plants lacking the expression of individual isoforms show no differences in growth and development, and are not significantly different from wild-type plants in soluble sugar, starch, and cellulose contents under all growth conditions investigated. A T-DNA insertion mutant lacking the Sus3 isoform, which is exclusively expressed in stomatal cells, showed only a minor influence on guard-cell osmoregulation and/or bioenergetics. Although none of the sucrose synthases appear to be essential for normal growth under our standard growth conditions, they may be necessary for growth under stress conditions, and different isoforms of sucrose synthase respond differently to various abiotic stresses.
It has been shown that oxygen deprivation up-regulates Sus1 and Sus4 and increases total Susy activity. However, analysis of plants with reduced expression of both Sus1 and Sus4 revealed no obvious effects on plant performance under oxygen deprivation. Low temperature up-regulates Sus1 expression, but the loss of this isoform has no effect on the freezing tolerance of non-acclimated and cold-acclimated plants. These data provide a comprehensive overview of the expression of this gene family, which supports some of the previously reported roles for Susy and indicates the involvement of specific isoforms in metabolism and/or signalling.
G protein-coupled receptor (GPCR) genes are large gene families in every animal, sometimes making up to 1-2% of the animal's genome. Of all insect GPCRs, the neurohormone (neuropeptide, protein hormone, biogenic amine) GPCRs are especially important, because they, together with their ligands, occupy a high hierarchic position in the physiology of insects and steer crucial processes such as development, reproduction, and behavior. In this paper, we give a review of our current knowledge on Drosophila melanogaster GPCRs and use this information to annotate the neurohormone GPCR genes present in the recently sequenced genome from the honey bee Apis mellifera. We found 35 neuropeptide receptor genes in the honey bee (44 in Drosophila) and two genes, coding for leucine-rich repeats-containing protein hormone GPCRs (4 in Drosophila). In addition, the honey bee has 19 biogenic amine receptor genes (21 in Drosophila). The larger numbers of neurohormone receptors in Drosophila are probably due to gene duplications that occurred during recent evolution of the fly. Our analyses also yielded the likely ligands for 40 of the 56 honey bee neurohormone GPCRs identified in this study. In addition, we made some interesting observations on neurohormone GPCR evolution and the evolution and co-evolution of their ligands. For neuropeptide and protein hormone GPCRs, there appears to be a general co-evolution between receptors and their ligands. This is in contrast to biogenic amine GPCRs, where evolutionarily unrelated GPCRs often bind to the same biogenic amine, suggesting frequent ligand exchanges ("ligand hops") during GPCR evolution. (c) 2006 Elsevier Ltd. All rights reserved.
A casual look at regional unemployment rates reveals that there are vast differences, which cannot be explained by different institutional settings. Our paper attempts to trace these differences in the labor market performance back to the regions' specialization in products that are more or less advanced in their product cycle. The model we develop shows how individual profit and utility maximization endogenously yields higher employment levels in the beginning. In later phases, however, employment decreases in the presence of process innovation. Our model suggests that the only way to escape from this vicious circle is to specialize in products that are at the beginning of their "economic life". The model is based on an interaction of demand and supply side forces.
We prove the existence of a class of local in time solutions, including static solutions, of the Einstein-Euler system. This result is the relativistic generalisation of a similar result for the Euler-Poisson system obtained by Gamblin [8]. As in his case the initial data of the density do not have compact support but fall off at infinity in an appropriate manner. An essential tool in our approach is the construction and use of weighted Sobolev spaces of fractional order. Moreover, these new spaces allow us to improve the regularity conditions for the solutions of evolution equations. The details of this construction, the properties of these spaces and results on elliptic and hyperbolic equations will be presented in a forthcoming article.
Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. To correct these misunderstandings, a deeper knowledge of the concepts and their historical background, as well as an overview of service-oriented architectures, is needed and is given in this paper.
The main claim of this paper is that the minimalist framework and optimality theory adopt more or less the same architecture of grammar: both assume that a generator defines a set S of potentially well-formed expressions that can be generated on the basis of a given input, and that there is an evaluator that selects the expressions from S that are actually grammatical in a given language L. The paper therefore proposes a model of grammar in which the strengths of the two frameworks are combined: more specifically, it is argued that the computational system of human language C_HL from the minimalist program creates a set S of potentially well-formed expressions, and that these are subsequently evaluated in an optimality-theoretic fashion.
An increasing number of applications require user interfaces that facilitate the handling of large geodata sets. Using virtual 3D city models, complex geospatial information can be communicated visually in an intuitive way. Therefore, real-time visualization of virtual 3D city models represents a key functionality for interactive exploration, presentation, analysis, and manipulation of geospatial data. This thesis concentrates on the development and implementation of concepts and techniques for real-time city model visualization. It discusses rendering algorithms as well as complementary modeling concepts and interaction techniques. In particular, the work introduces a new real-time rendering technique to handle city models of high complexity concerning texture size and number of textures. Such models are difficult to handle with current technology, primarily due to two problems: - Limited texture memory: The amount of simultaneously usable texture data is limited by the memory of the graphics hardware. - Limited number of textures: Using several thousand different textures simultaneously causes significant performance problems due to texture switch operations during rendering. The multiresolution texture atlas approach introduced in this thesis overcomes both problems. During rendering, it permanently maintains a small set of textures that is sufficient for the current view and the available screen resolution. The efficiency of multiresolution texture atlases is evaluated in performance tests. To summarize, the results demonstrate that the following goals have been achieved: - Real-time rendering becomes possible for 3D scenes whose amount of texture data exceeds the main memory capacity. - Overhead due to texture switches is kept permanently low, so that the number of different textures has no significant effect on the rendering frame rate.
Furthermore, this thesis introduces two new approaches for real-time city model visualization that use textures as core visualization elements: - An approach for visualization of thematic information. - An approach for illustrative visualization of 3D city models. Both techniques demonstrate that multiresolution texture atlases provide a basic functionality for the development of new applications and systems in the domain of city model visualization.
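The core idea behind the approach described above, keeping only the texture resolution the current view actually needs, can be illustrated with a mip-level selection sketch. The functions below are a simplification for illustration only, not the thesis' actual multiresolution texture atlas algorithm:

```python
import math

def required_mip_level(texture_size, projected_pixels):
    """Coarsest mip level whose resolution still covers the on-screen
    footprint of a texture (level 0 = full resolution, each level halves it)."""
    if projected_pixels >= texture_size:
        return 0
    return int(math.floor(math.log2(texture_size / max(projected_pixels, 1))))

def texels_needed(texture_size, projected_pixels):
    """Square texel count actually required for the current view."""
    return (texture_size >> required_mip_level(texture_size, projected_pixels)) ** 2

# A 4096x4096 facade texture seen at only 256 pixels needs its level-4 mip:
# 256x256 texels instead of 16 million.
level = required_mip_level(4096, 256)
```

Packing only these view-dependent mip levels into shared atlas textures is what keeps both the texture memory footprint and the number of texture switches small.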
Our work goes in two directions. First, we transfer definitions, concepts, and results of the theory of hyperidentities and solid varieties from the total to the partial case. (1) We prove that the operators χ^A_RNF and χ^E_RNF are only monotone and additive, and we show that the sets of all fixed points of these operators are characterized by only three instead of four equivalent conditions for the case of closure operators. (2) We prove that V is n-SF-solid iff clone^SF(V) is free with respect to itself, freely generated by the independent set {[f_i(x_1, ..., x_n)]Id^SF_n(V) | i ∈ I}. (3) We prove that if V is n-fluid and ~_V |_P(V) = ~_V-iso |_P(V), then V is k-unsolid for k ≥ n (where P(V) is the set of all V-proper hypersubstitutions of type τ). (4) We prove that a strong M-hyperquasi-equational theory is characterized by four equivalent conditions. The second direction of our work follows ideas that are specific to the partial case. (1) We characterize all minimal partial clones that are strongly solidifyable. (2) We define the operator χ^A_Ph, where Ph is a monoid of regular partial hypersubstitutions. Using this concept, we define the notion of a PHyp_R(τ)-solid strong regular variety of partial algebras, and we prove that a PHyp_R(τ)-solid strong regular variety satisfies four equivalent conditions.
This study introduces a method for multiparallel analysis of small organic compounds in the unicellular green alga Chlamydomonas reinhardtii, one of the premier model organisms in cell biology. The comprehensive study of changes in metabolite composition, or metabolomics, in response to environmental, genetic, or developmental signals is an important complement to other functional genomics techniques in the effort to understand how genes, proteins, and metabolites are all integrated into a seamless and dynamic network to sustain cellular functions. The sample preparation protocol was optimized to quickly inactivate enzymatic activity, achieve maximum extraction capacity, and process large sample quantities. As a result of the rapid sampling, extraction, and analysis by gas chromatography coupled to time-of-flight mass spectrometry (GC-TOF), more than 800 analytes can be measured from a single sample, of which over 100 could be positively identified. As part of the analysis of GC-TOF raw data, aliquot ratio analysis to systematically remove artifact signals, and tools for the use of principal component analysis (PCA) on metabolomic datasets, are proposed. Cells subjected to nitrogen (N)-, phosphorus (P)-, sulfur (S)- or iron (Fe)-depleted growth conditions develop highly distinctive metabolite profiles, with metabolites implicated in many different processes being affected in their concentration during adaptation to nutrient deprivation. Metabolite profiling allowed characterization of both specific and general responses to nutrient deprivation at the metabolite level. Modulation of the substrates for N-assimilation and of the oxidative pentose phosphate pathway indicated a priority for maintaining the capability for immediate activation of N assimilation even under conditions of decreased metabolic activity and arrested growth, while the rise in 4-hydroxyproline in S-deprived cells could be related to enhanced degradation of cell wall proteins.
The adaptation to sulfur deficiency was analyzed with greater temporal resolution, and the responses of wild-type cells were compared with those of mutant cells deficient in SAC1, an important regulator of the sulfur deficiency response. Whereas concurrent metabolite depletion and accumulation occur during adaptation to S deprivation in wild-type cells, the sac1 mutant strain is characterized by a massive inability to sustain many processes that normally lead to transient or permanent accumulation of certain metabolites or to recovery of metabolite levels after an initial down-regulation. For most of the steps in arginine biosynthesis in Chlamydomonas, mutants have been isolated that are deficient in the respective enzyme activities. Three strains deficient in the activities of N-acetylglutamate-5-phosphate reductase (arg1), N2-acetylornithine aminotransferase (arg9), and argininosuccinate lyase (arg2), respectively, were analyzed with regard to the activation of endogenous arginine biosynthesis after withdrawal of externally supplied arginine. Enzymatic blocks in the arginine biosynthetic pathway could be characterized by precursor accumulation, like the amassment of argininosuccinate in arg2 cells, and by depletion of intermediates occurring downstream of the enzymatic block, e.g. N2-acetylornithine, ornithine, and argininosuccinate depletion in arg9 cells. The unexpected finding of substantial levels of the arginine pathway intermediates N-acetylornithine, citrulline, and argininosuccinate downstream of the enzymatic block in arg1 cells provided an explanation for the residual growth capacity of these cells in the absence of external arginine sources.
The presence of these compounds, together with the unusual accumulation in arg1 cells of N-acetylglutamate, the first intermediate that commits the glutamate backbone to ornithine and arginine biosynthesis, suggests that alternative pathways, possibly involving the activity of ornithine aminotransferase, may be active when the default reaction sequence producing ornithine via acetylation of glutamate is disabled.
In view of the importance of charge storage in polymer electrets for electromechanical transducer applications, the aim of this work is to contribute to the understanding of the charge-retention mechanisms. Furthermore, we will try to explain how the long-term storage of charge carriers in polymeric electrets works and to identify the probable trap sites. Charge trapping and de-trapping processes were investigated in order to obtain evidence of the trap sites in polymeric electrets. The charge de-trapping behavior of two particular polymer electrets was studied by means of thermal and optical techniques. In order to obtain evidence of trapping or de-trapping, charge and dipole profiles in the thickness direction were also monitored. In this work, the study was performed on polyethylene terephthalate (PETP) and on cyclic-olefin copolymers (COCs). PETP is a photo-electret and contains a net dipole moment that is located in the carbonyl group (C = O). The electret behavior of PETP arises from both the dipole orientation and the charge storage. In contrast to PETP, COCs are not photo-electrets and do not exhibit a net dipole moment. The electret behavior of COCs arises from the storage of charges only. COC samples were doped with dyes in order to probe their internal electric field. COCs show shallow charge traps at 0.6 and 0.11 eV, characteristic for thermally activated processes. In addition, deep charge traps are present at 4 eV, characteristic for optically stimulated processes. PETP films exhibit a photo-current transient with a maximum that depends on the temperature with an activation energy of 0.106 eV. The pair thermalization length (rc) calculated from this activation energy for the photo-carrier generation in PETP was estimated to be approx. 4.5 nm. The generated photo-charge carriers can recombine, interact with the trapped charge, escape through the electrodes or occupy an empty trap. 
PETP possesses a small quasi-static pyroelectric coefficient (QPC): ~0.6 nC/(m²K) for unpoled samples, ~60 nC/(m²K) for poled samples and ~60 nC/(m²K) for unpoled samples under an electric bias (E ~10 V/µm). When stored charges generate an internal electric field of approx. 10 V/µm, they are able to induce a QPC comparable to that of the oriented dipoles. Moreover, we observe charge-dipole interaction. Since the raw data of the QPC experiments on PETP samples are noisy, a numerical Fourier-filtering procedure was applied. Simulations show that the data analysis is reliable even when the noise level is up to 3 times larger than the calculated pyroelectric current for the QPC. PETP films revealed shallow traps at approx. 0.36 eV during thermally stimulated current measurements. These energy traps are associated with molecular dipole relaxations (C = O). On the other hand, photo-activated measurements yield deep charge traps at 4.1 and 5.2 eV. The observed wavelengths belong to transitions in PETP that are analogous to the π - π* benzene transitions. The observed charge de-trapping selectivity in the photocharge decay indicates that the charge de-trapping results from a direct photon-charge interaction. Additionally, the charge de-trapping can be facilitated by photo-exciton generation and the interaction of the photo-excitons with trapped charge carriers. These results indicate that the benzene rings (C6H4) and the dipolar groups (C = O) can stabilize and share an extra charge carrier in a chemical resonance. In this way, this charge could be de-trapped in connection with the photo-transitions of the benzene ring and with the dipole relaxations. The thermally activated charge release shows a trap depth different from its optical counterpart. This difference indicates that the trap levels depend on the de-trapping process and on the chemical nature of the trap site. That is, the processes of charge de-trapping from shallow traps are related to secondary forces.
The processes of charge de-trapping from deep traps are related to primary forces. Furthermore, the presence of deep trap levels causes the stability of the charge for long periods of time.
Forum: EU-Diplomatie im Jahre 2020
We analyze the notions of monotonicity and complete monotonicity for Markov Chains in continuous-time, taking values in a finite partially ordered set. Similarly to what happens in discrete-time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete-time.
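For intuition, discrete-time stochastic monotonicity on a *totally* ordered state space (a special case of the posets treated above, and not the continuous-time setting of the paper) can be checked by comparing tail sums of the transition matrix:

```python
import numpy as np

def is_monotone(P):
    """Check stochastic monotonicity of a transition matrix on the
    totally ordered state space {0 < 1 < ... < n-1}: for i <= j the
    row j must stochastically dominate row i, i.e. every tail sum
    sum_{k >= l} P[j, k] >= sum_{k >= l} P[i, k]."""
    tails = np.cumsum(np.asarray(P)[:, ::-1], axis=1)[:, ::-1]  # tail sums
    return bool(np.all(np.diff(tails, axis=0) >= -1e-12))

# A birth-death chain is monotone ...
P_bd = np.array([[0.7, 0.3, 0.0],
                 [0.3, 0.4, 0.3],
                 [0.0, 0.3, 0.7]])
# ... while a chain that jumps "past" its neighbour need not be
P_bad = np.array([[0.1, 0.0, 0.9],
                  [0.9, 0.1, 0.0],
                  [0.0, 0.9, 0.1]])
print(is_monotone(P_bd), is_monotone(P_bad))
```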
It is shown that an elliptic scattering operator A on a compact manifold with boundary, with operator-valued coefficients in the morphisms of a bundle of Banach spaces of class (HT) having Pisier's property (α), has maximal regularity (up to a spectral shift), provided that the spectrum of the principal symbol of A on the scattering cotangent bundle avoids the right half-plane. This is accomplished by representing the resolvent in terms of pseudodifferential operators with R-bounded symbols, yielding by an iteration argument the R-boundedness of λ(A − λ)⁻¹ for Re λ ≥ τ, for some τ ∈ ℝ. To this end, elements of a symbolic and operator calculus of pseudodifferential operators with R-bounded symbols are introduced. The significance of this method for proving maximal regularity results for partial differential operators is underscored by considering also a more elementary situation of anisotropic elliptic operators on ℝ^d with operator-valued coefficients.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
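As a rough illustration of the cross-recurrence idea, one can measure how often two categorical gaze streams visit the same region as a function of lag; the lag profile then reveals the temporal coupling between the interlocutors' eye movements. The streams and region codes below are invented, not data from the experiment:

```python
import numpy as np

def cross_recurrence_rate(x, y, lag):
    """Fraction of time points at which two categorical gaze streams
    visit the same region, with y shifted by `lag` samples -- a
    minimal stand-in for one diagonal of a cross-recurrence plot."""
    x, y = np.asarray(x), np.asarray(y)
    if lag >= 0:
        a, b = x[lag:], y[:len(y) - lag]
    else:
        a, b = x[:len(x) + lag], y[-lag:]
    return float(np.mean(a == b))

# Two hypothetical gaze streams over regions 0-3; y trails x by 2 samples
rng = np.random.default_rng(0)
x = rng.integers(0, 4, 300)
y = np.roll(x, 2)  # the second stream repeats the first, 2 samples later
rates = {lag: cross_recurrence_rate(x, y, lag) for lag in range(-4, 5)}
print(max(rates, key=rates.get))  # coupling peaks at the lag undoing the shift
```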
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung Workshop vom 9. - 10. Februar 2006
When Galactic microlensing events of stars are observed, one usually measures either a symmetric light curve corresponding to a single lens, or an asymmetric light curve, often with caustic crossings, in the case of a binary lens system. In principle, the fraction of binary stars in a certain separation range can be estimated from the number of measured microlensing events. However, a binary system may produce a light curve which can be fitted well as a single-lens light curve, in particular if the data sampling is poor and the error bars are large. We investigate what fraction of microlensing events produced by binary stars at different separations may be well fitted by, and hence misinterpreted as, single-lens events under various observational conditions. We find that this fraction strongly depends on the separation of the binary components, reaching its minimum between 0.6 and 1.0 Einstein radii, where it is still of the order of 5%; the Einstein radius corresponds to a few A.U. for typical Galactic microlensing scenarios. The rate of misinterpretation is higher for short microlensing events lasting up to a few months and for events with smaller maximum amplification. For fixed separation it increases for binaries with more extreme mass ratios. The problem of degeneracy in the photometric light-curve solution between binary-lens and binary-source microlensing events was studied on simulated data and on data observed by the PLANET collaboration. The fitting code BISCO, built around the PIKAIA genetic-algorithm optimization routine, was written to optimize binary-source microlensing light curves observed at different sites in the I, R and V photometric bands. Tests on simulated microlensing light curves show that BISCO successfully finds the solution to a binary-source event in a very wide parameter space. A flux-ratio method is suggested in this work for breaking the degeneracy between binary-lens and binary-source photometric light curves.
Models show that only a few additional data points in the photometric V band, together with a full light curve in the I band, will enable breaking the degeneracy. Very good data quality and dense data sampling, combined with accurate binary-lens and binary-source modeling, yielded the discovery of the lowest-mass planet found outside the Solar System so far, OGLE-2005-BLG-390Lb, with only 5.5 Earth masses. This was the first observed microlensing event in which the degeneracy between a planetary binary-lens model and an extreme flux-ratio binary-source model has been successfully broken. For the events OGLE-2003-BLG-222 and OGLE-2004-BLG-347, the degeneracy was encountered despite very dense data sampling. From light-curve modeling and stellar evolution theory, there was a slight preference for explaining OGLE-2003-BLG-222 as a binary-source event and OGLE-2004-BLG-347 as a binary-lens event. However, without spectra, this degeneracy cannot be fully broken. No planet has been found around a white dwarf so far, though it is believed that Jovian planets should survive the late stages of stellar evolution, and that white dwarfs will retain planetary systems in wide orbits. We want to perform high-precision astrometric observations of nearby white dwarfs in wide binary systems with red dwarfs in order to find planets around white dwarfs. We selected a sample of observing targets (WD-RD binary systems, not published yet) which may have planets around the WD component, and modeled synthetic astrometric orbits which could be observed for these targets using existing and future astrometric facilities. Modeling was performed for astrometric accuracies of 0.01, 0.1, and 1.0 mas, separations between WD and planet of 3 and 5 A.U., a binary-system separation of 30 A.U., planet masses of 10 Earth masses and 1 and 10 Jupiter masses, WD masses of 0.5 and 1.0 solar masses, and distances to the system of 10, 20 and 30 pc.
It was found that the PRIMA facility at the VLTI, once operating, will be able to detect planets down to 1 Jupiter mass around white dwarfs by measuring the astrometric wobble of the WD due to a planetary companion. We show for the simulated observations that it is possible to model the orbits and recover the parameters describing the potential planetary systems.
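A back-of-envelope check of the detectability claim: the astrometric signature of a planet scales with the planet-to-star mass ratio, the orbital separation, and the inverse distance. The helper below uses the approximation a* ≈ a·m_p/M_* (neglecting the planet mass in the total) together with illustrative values from the modeling grid described above:

```python
# Astrometric signature of a planet: the host star orbits the common
# barycentre with semi-major axis a* = a * m_p / M_*, which subtends
# an angle alpha = a*/d on the sky (a in AU, d in pc gives alpha in
# arcsec; multiply by 1000 for mas).
M_JUP = 9.543e-4  # Jupiter mass in solar masses

def wobble_mas(m_planet_mjup, m_star_msun, a_au, d_pc):
    return 1000.0 * (m_planet_mjup * M_JUP / m_star_msun) * a_au / d_pc

# 1 Jupiter-mass planet at 3 AU around a 0.5 M_sun white dwarf at 10 pc:
# a ~0.6 mas wobble, comfortably above a 0.01-0.1 mas accuracy
print(round(wobble_mas(1.0, 0.5, 3.0, 10.0), 3))
```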
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data like tree rings or sediment and ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable approaches of statistical modelling as well as methods of time series analysis are necessary which are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace element abundances and grain-size distributions obtained near Cape Roberts (East Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transport and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship with the mutual component overlap as well as with the information missing due to grouping and truncation of the measured data is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals certain problems accompanying the application of finite mixture models which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
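A finite mixture decomposition of the kind described can be sketched with a minimal EM fit of a two-component one-dimensional Gaussian mixture; the synthetic "grain-size" data below are invented for illustration and stand in for two transport regimes:

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture, the kind of
    finite mixture used to decompose grain-size distributions."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])        # crude initialisation
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = (w / (sd * np.sqrt(2 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd

# Synthetic "grain sizes" (log scale) from two transport regimes
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(2.0, 0.5, 700), rng.normal(6.0, 1.0, 300)])
w, mu, sd = em_two_gaussians(x)
print(np.round(mu, 1))  # component means near 2 and 6
```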
The goal of a Brain-Computer Interface (BCI) is the development of a unidirectional interface between a human and a computer that allows control of a device via brain signals alone. While the BCI systems of almost all other groups require the user to be trained over several weeks or even months, the group of Prof. Dr. Klaus-Robert Müller in Berlin and Potsdam, to which I belong, was one of the first research groups in this field to use machine learning techniques on a large scale. The adaptivity of the processing system to the individual brain patterns of the subject confers huge advantages on the user. Thus BCI research is considered a hot topic in machine learning and computer science. It requires interdisciplinary cooperation with fields such as neuroscience, since only by combining machine learning and signal processing techniques based on neurophysiological knowledge will the largest progress be made. In this work I particularly deal with my part of this project, which lies mainly in the area of computer science. I have considered the following three main points: <b>Establishing a performance measure based on information theory:</b> I have critically examined the assumptions of Shannon's information transfer rate for application in a BCI context. By establishing suitable coding strategies I was able to show that this theoretical measure approximates quite well what is practically achievable. <b>Transfer and development of suitable signal processing and machine learning techniques:</b> One substantial component of my work was to develop several machine learning and signal processing algorithms to improve the efficiency of a BCI. Based on the neurophysiological knowledge that several independent EEG features can be observed for some mental states, I have developed a method for combining different and possibly independent features which improved performance.
In some cases the combination algorithm outperforms the best single feature by more than 50%. Furthermore, I have addressed, both theoretically and practically via the development of suitable algorithms, the question of the optimal number of classes which should be used for a BCI. It transpired that with the BCI performances reported so far, three or four different mental states are optimal. For another extension I have combined ideas from signal processing with those of machine learning, since a high gain can be achieved if the temporal filtering, i.e., the choice of frequency bands, is automatically adapted to each subject individually. <b>Implementation of the Berlin Brain-Computer Interface and realization of suitable experiments:</b> Finally, a further substantial component of my work was to realize an online BCI system which includes the developed methods but is also flexible enough to allow the simple realization of new algorithms and ideas. So far, bit rates of up to 40 bits per minute have been achieved with this system by completely untrained users, which, compared to the results of other groups, is highly successful.
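The Shannon-based performance measure discussed in the first point is commonly computed as the Wolpaw information transfer rate; a minimal sketch, with illustrative accuracy and selection-rate values, is:

```python
import math

def wolpaw_bitrate(n_classes, accuracy, decisions_per_min):
    """Information transfer rate in bits per minute, under the usual
    assumptions of equiprobable classes and uniformly distributed
    errors -- the kind of Shannon-based measure whose assumptions
    the thesis examines."""
    n, p = n_classes, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    elif p < 1.0:  # p == 0: all probability mass on the n-1 wrong classes
        bits += math.log2(1.0 / (n - 1))
    return bits * decisions_per_min

# e.g. a 2-class BCI at 90 % accuracy making 25 selections per minute
print(round(wolpaw_bitrate(2, 0.90, 25), 1))  # ~13.3 bits/min
```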
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem-solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and corpus annotation which shows that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models, and addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can be used as a foundation for improving models of tutoring dialogue.
When top sports performers fail or “choke” under pressure, everyone asks: why? Research has identified a number of conditions (e.g. an audience) that elicit choking and a number of variables (e.g. trait anxiety) that moderate the pressure–performance relation. Furthermore, mediating processes have been investigated. For example, explicit monitoring theories link performance failure under psychological stress to an increase in attention paid to a skill and its step-by-step execution (Beilock & Carr, 2001). Many studies have provided support for these ideas. However, so far only overt performance measures have been investigated, which do not allow more thorough analyses of processes or performance strategies. A theoretical framework has also been missing that could (a) explain the effects of explicit monitoring on skill execution and (b) make predictions as to what is being monitored during execution. Consequently, in this study the nodal-point hypothesis of motor control (Hossner & Ehrlenspiel, 2006) was used to predict movement changes on three levels of analysis at certain “nodal points” within the movement sequence. Performance in two different laboratory tasks was assessed with respect to overt performance (the observable result, for example accuracy on the target), covert performance (description of movement execution, for example the acceleration of body segments) and task exploitation (the utilization of task properties such as covariation). A fake competition (see Beilock & Carr, 2002) was used to induce pressure. In study 1 a ball-bouncing task in a virtual-reality set-up was chosen. Previous studies (de Rugy, Wei, Müller, & Sternad, 2003) have shown that learners are usually able to “passively” exploit the dynamical stability of the system. According to explicit monitoring theories, choking should be expected either if the task itself evokes “active control” (Experiment 1) or if learners are provided with explicit instructions (Experiment 2).
In both experiments, participants first went through a practice phase on day 1. On day 2, following the baseline test, participants were divided into a High-Stress or No-Stress Group for the final performance test. The High-Stress Group entered a fake competition. Overt performance was measured by the absolute error (AE) of ball amplitudes from target height; covert performance was measured by period modulation between successive hits; and task exploitation was measured by acceleration (AC) at ball-racket impact and covariation (COV) of impact parameters. To evoke active control in Exp. 1 (N=20), perturbations to the ball flight were introduced. In Exp. 2 (N=39), half of the participants received explicit skill-focused instructions during learning. For overt performance, results generally show an interaction between Stress Group and Test, with better performance (i.e. lower AE) for the High-Stress Group in the final performance test. This effect is independent of the instructions that participants had received during learning (Exp. 2). Similar effects were found for COV but not for AC. In study 2, a visuomotor tracking task was used in which participants had to pursue a target cross that moved on an invisible curve. This curve consisted of 3 segments with 6 turning points sequentially ordered around the x-axis. Participants learned two short movement sequences which were then concatenated to form a single sequence. It was expected that under pressure this sequence should “fall apart” at the point of concatenation. Overt performance was assessed by the root mean square error between target and pursuit cross as well as by the absolute error at the turning points; covert performance was measured by the latency from target to pursuit turning; and task exploitation was measured by the temporal covariation between successive intervals between turning points.
Experiment 3 (intra-individual variation) as well as Experiment 4 (inter-individual variation) showed performance enhancement in the pressure situation on the overt level, with matching results on the covert and task-exploitation levels. Thus, contrary to previous studies, no choking under pressure was found in any of the experiments. This may be interpreted as a failure of the experimental manipulation, but it certainly also highlights important characteristics of the task: choking should occur in tasks where performers do not have the time to use action- or thought-control strategies, in tasks that are more relevant to their “self”, and in tasks that are discrete in nature.
In this paper we compare the behaviour of adverbs of frequency (de Swart 1993) like usually with the behaviour of adverbs of quantity like for the most part in sentences that contain plural definites. We show that sentences containing the former type of Q-adverb provide evidence that Quantificational Variability Effects (Berman 1991) come about as an indirect effect of quantification over situations: in order for quantificational variability readings to arise, these sentences have to obey two newly observed constraints that clearly set them apart from sentences containing corresponding quantificational DPs, and that can plausibly be explained under the assumption that quantification over (the atomic parts of) complex situations is involved. Concerning sentences with the latter type of Q-adverb, on the other hand, such evidence is lacking: with respect to the constraints just mentioned, they behave like sentences that contain corresponding quantificational DPs. We take this as evidence that Q-adverbs like for the most part do not quantify over the atomic parts of sum eventualities in the cases under discussion (as claimed by Nakanishi and Romero (2004)), but rather over the atomic parts of the respective sum individuals.
Optical methods play an important role in process analytical technologies (PAT). Four examples of optical process and quality sensing (OPQS) are presented, which are based on three important experimental techniques: near-infrared absorption, luminescence quenching, and a novel method, photon density wave (PDW) spectroscopy. These are used to evaluate four process and quality parameters related to beer brewing and polyurethane (PU) foaming processes: the ethanol content and the oxygen (O2) content in beer, the biomass in a bioreactor, and the cellular structures of PU foam produced in a pilot production plant.
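The luminescence-quenching route to the oxygen content can be sketched via the Stern-Volmer relation, which links the unquenched and quenched decay times (or intensities) to the quencher concentration. The calibration constants below (tau0, k_sv) are invented placeholders, not values from the study:

```python
# Stern-Volmer kinetics: tau0/tau = 1 + K_SV * [O2], so a measured
# decay time tau can be inverted to an oxygen concentration once the
# unquenched decay time tau0 and K_SV are known from calibration.
def oxygen_from_decay(tau, tau0=60e-6, k_sv=2.5):
    """Return [O2] (arbitrary units, e.g. mg/L) from a measured
    luminescence decay time tau, given an assumed unquenched decay
    time tau0 and an assumed Stern-Volmer constant k_sv."""
    return (tau0 / tau - 1.0) / k_sv

print(round(oxygen_from_decay(30e-6), 2))  # tau halved -> [O2] = 1/k_sv
```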
We give a construction of an eigenstate for a non-critical level of the Hamiltonian function, and investigate the contribution of Morse critical points to the spectral decomposition. We compare the rigorous result with the series obtained by a perturbation theory. As an example the relation to the spectral asymptotics is discussed.
The ultimate aim of this study is to better understand the relevance of weak electricity in the adaptive radiation of the African mormyrid fish. The chosen model taxon, the genus Campylomormyrus, exhibits a wide diversity of electric organ discharge (EOD) waveform types. Their EOD is age, sex, and species specific and is an important character for discriminating among species that are otherwise cryptic. After having established a complementary set of molecular markers, I examined the radiation of Campylomormyrus by a combined approach of molecular data (sequence data from the mitochondrial cytochrome b and the nuclear S7 ribosomal protein gene, as well as 18 microsatellite loci, especially developed for the genus Campylomormyrus), observation of ontogeny and diversification of EOD waveform, and morphometric analysis of relevant morphological traits. I built up the first convincing phylogenetic hypothesis for the genus Campylomormyrus. Taking advantage of microsatellite data, the identified phylogenetic clades proved to be reproductively isolated biological species. This way I detected at least six species occurring in sympatry near Brazzaville/Kinshasa (Congo Basin). By combining molecular data and EOD analyses, I could show that there are three cryptic species, characterised by their own adult EOD types, hidden under a common juvenile EOD form. In addition, I confirmed that adult male EOD is species-specific and is more different among closely related species than among more distantly related ones. This result and the observation that the EOD changes with maturity suggest its function as a reproductive isolation mechanism. As a result of my morphometric shape analysis, I could assign species types to the identified reproductively isolated groups to produce a sound taxonomy of the group. Besides this, I could also identify morphological traits relevant for the divergences between the identified species. 
Among them, the variations I found in the shape of the trunk-like snout suggest the presence of different trophic specializations; therefore, this trait might have been involved in the ecological radiation of the group. In conclusion, I provided a convincing scenario envisioning an adaptive radiation of weakly electric fish triggered by sexual selection via assortative mating due to differences in EOD characteristics, but caused by a divergent selection of morphological traits correlated with the feeding ecology.
The first goal of the present work addresses the need of the Global Change and Financial Transition (GFT) working group at the Potsdam Institute for Climate Impact Research (PIK) for different rationing methods: I provide a toolbox which contains a variety of rationing methods to be applied to micro-economic disequilibrium models of the lagom model family. This toolbox consists of well-known rationing methods and of rationing methods provided specifically for lagom. To ensure easy application, the toolbox is constructed in modular fashion. The second goal of the present work is to present a micro-economic labour market where heterogeneous labour suppliers experience consecutive job opportunities and need to decide whether to apply for employment. The labour suppliers are heterogeneous with respect to their qualifications and their beliefs about the application behaviour of their competitors. They learn simultaneously – in Bayesian fashion – about their individual perceived probability of obtaining employment conditional on application (PPE) by observing each others' application behaviour over a cycle of job opportunities.
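The Bayesian learning of the PPE can be caricatured, for a single labour supplier, as a Beta-Bernoulli update; the actual model involves heterogeneous, simultaneously learning agents, so this is only a minimal one-agent sketch with invented outcomes:

```python
# Beta-Bernoulli update: each observed job opportunity in which an
# applicant did (1) or did not (0) obtain employment updates the
# Beta(a, b) belief about the success probability.
def update_ppe(a, b, outcomes):
    """Return posterior (a, b) and posterior mean after a sequence of
    observed hiring outcomes. Prior Beta(a, b) encodes initial belief."""
    a += sum(outcomes)
    b += len(outcomes) - sum(outcomes)
    return a, b, a / (a + b)

# Uniform prior Beta(1, 1); 3 successes out of 5 observed opportunities
a, b, mean = update_ppe(1.0, 1.0, [1, 0, 1, 1, 0])
print(a, b, round(mean, 3))  # posterior Beta(4, 3), mean 4/7
```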
Content: 1. Objectives 2. Sociohistorical Background 2.1. The Cornish 2.2. The Welsh 2.3. The Bretons 3. Characteristics of the Brythonic Naming System 3.1. Type 1 Names: Patronymic Lineage 3.2. Type 2 Names: Geographic Origin or Place of Residence 3.3. Type 3 Names: Occupational Activities (Generally Linked to Peasantry) 3.4. Type 4 Names: Physical Characteristics, Moral Flaws 3.5. Type 5 Names: Epithets Relating to Character, Titles of Nobility, etc. 3.6. Epithets Containing References to Victory, War, Warriors, Weapons 3.7. Epithets Containing References to Courage, Strength, Impetuousness and War-like Animals 3.8. Epithets Containing References to Honorific Titles, Noble Lineage, Social Status and Aristocratic Values 4. Summary
Contents: Part I: Symplectic Geometry Chapter 1: Symplectic Spaces and Lagrangian Planes Chapter 2: The Symplectic Group Chapter 3: Multi-Oriented Symplectic Geometry Chapter 4: Intersection Indices in Lag(n) and Sp(n) Part II: Heisenberg Group, Weyl Calculus, and Metaplectic Representation Chapter 5: Lagrangian Manifolds and Quantization Chapter 6: Heisenberg Group and Weyl Operators Chapter 7: The Metaplectic Group Part III: Quantum Mechanics in Phase Space Chapter 8: The Uncertainty Principle Chapter 9: The Density Operator Chapter 10: A Phase Space Weyl Calculus
Do institutions matter?
(2006)
Contents: 1 Introduction 2 Institutions and Institutional Change 2.1 Institutions and Theoretical Concepts in Economics 2.2 Path Dependence 2.3 Inconsistency of Institutional Development 2.4 Determinants of Effectiveness 2.5 Efficiency of New Institutions 3 What is “Competition Policy”? 4 The Competition Policy in Russia as an Institution 4.1 Establishment of the Competition Policy as an Institution 4.2 Market Structure and Competition Policy 4.3 Measures of Competition Policy 4.3.1 Prohibition of Competition-Restrictive Agreements or Concerted Actions 4.3.2 Abuse of Dominance 4.3.3 Merger Control 4.3.4 Competition-Restrictive Actions of Administrative Bodies 4.4 Violations of the Competition Law 4.5 Problems of the Russian Competition Policy 5 Which Mistakes Has Russia Made in the Implementation of the Competition Policy? 6 Is a Lack of Effectiveness of Transplanted Institutions Inevitable? 7 Concluding Remarks
This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation and the annual cycle. More precisely, the work focused on two main problems: 1. How to separate both oscillations within a tractable model for understanding the behaviour of the whole system. 2. How to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering that atmospheric effects are secondary to the ocean dynamics. The results found may be summarised as follows: 1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction prove to be better at defining a lower limit for the dimensionality of the system as well as at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling, provides a physically appealing method of decomposing the data, as it replaces the Euclidean distances in the manifold with an approximation of the geodesic distances. We expect that this method could be successfully applied to other oscillatory extended systems and, in particular, to meteorological systems. 2. A three-dimensional dynamical system could be modeled, using a backfitting algorithm, to describe the dynamics of the sea surface temperature in the tropical Pacific Ocean.
We observed that, although few data points were available, we could predict the future behaviour of the coupled ENSO-annual cycle system for lead times of up to six months, although the constructed system presented several drawbacks: few data points to feed into the backfitting algorithm, an untrained model, a lack of forcing with external data, and the simplification of using a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skill of the three-dimensional time series was as good as that found in much more complex models. This suggests that the climatological system in the tropics is mainly explained by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that the nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. A global view of the work shows a general procedure for modeling climatological systems: first, find a suitable method of either linear or nonlinear dimensionality reduction; then, extract low-dimensional time series with the chosen method; finally, fit a low-dimensional model using a backfitting algorithm in order to predict future states of the system.
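The Isomap step of this procedure — neighbourhood graph, geodesic distances, classical MDS — can be sketched in a few lines. This is illustrative only (a toy curve instead of SST fields; real analyses use optimised implementations):

```python
import numpy as np

def isomap_1d(X, k=2):
    """Tiny Isomap sketch: k-NN graph -> geodesic distances via
    Floyd-Warshall -> classical MDS, keeping one coordinate."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    nn = np.argsort(D, axis=1)[:, 1:k + 1]          # k nearest neighbours
    for i in range(n):
        for j in nn[i]:
            G[i, j] = G[j, i] = D[i, j]
    for m in range(n):                              # Floyd-Warshall
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    # classical MDS on the squared geodesic distances
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    return vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))

# Points on a half circle: geodesic distances "unroll" it onto a line,
# so the 1-D embedding tracks the arc-length parameter t
t = np.linspace(0.0, np.pi, 40)
X = np.stack([np.cos(t), np.sin(t)], axis=1)
emb = isomap_1d(X, k=2)
print(abs(np.corrcoef(emb, t)[0, 1]) > 0.99)
```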
If we want to compare the explanatory and descriptive adequacy of the MP and OT, the original definitions by Chomsky (1964) are of little direct use. However, a relativized version of both notions can be defined, which can be used to express a number of parallels between the study of individual I-languages and the language faculty. In any version of explanatory and descriptive adequacy, the two notions derive from the research programme and can only be achieved together. They can therefore not be used to characterize the difference in orientation between OT and the MP. Even if ‘OT’ is restricted to a particular theory in Chomskyan linguistics (to the exclusion of, for instance, its use in LFG), it cannot be said to be stronger in descriptive adequacy than in explanatory adequacy in the technical sense of these terms.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung Workshop vom 9. - 10. Februar 2006
The paper presents an in-depth study of focus marking in Gùrùntùm, a West Chadic language spoken in Bauchi Province of Northern Nigeria. Focus in Gùrùntùm is marked morphologically by means of a focus marker a, which typically precedes the focus constituent. Even though the morphological focus-marking system of Gùrùntùm allows for a lot of fine-grained distinctions in information structure (IS) in principle, the language is not entirely free of focus ambiguities that arise as the result of conflicting IS- and syntactic requirements that govern the placement of focus markers. We show that morphological focus marking with a applies across different types of focus, such as new-information, contrastive, selective and corrective focus, and that a does not have a second function as a perfectivity marker, as is assumed in the literature. In contrast, we show at the end of the paper that a can also function as a foregrounding device at the level of discourse structure.
We study elliptic boundary value problems in a wedge with additional edge conditions of trace and potential type. We compute the (difference of the) number of such conditions in terms of the Fredholm index of the principal edge symbol. The task will be reduced to the case of special opening angles, together with a homotopy argument.
Soil seed banks near rubbing trees indicate dispersal of plant species into forests by wild boar
(2006)
Current knowledge about the processes that generate long-distance dispersal of plants is still limited despite its importance for the persistence of populations and the colonization of new potential habitats. Today, wild large mammals are presumed to be important vectors for the long-distance transport of diaspores within and between European temperate forest patches, and wild boar in particular have recently come into focus. Here we use a specific habit of wild boar, i.e. wallowing in mud and subsequently rubbing against trees, to evaluate epizoic dispersal of vascular plant diaspores. We present soil seed bank data from 27 rubbing trees versus 27 control trees from seven forest areas in Germany. The mean number of viable seeds and the number of plant species were higher in soil samples near rubbing trees than near control trees. Ten of the 20 most frequent species were more frequent near rubbing trees, and many species appeared exclusively in the soil samples near rubbing trees. The large number of plant species and seeds – estimated at > 1000 per tree – in the soils near rubbing trees is difficult to explain unless the majority were dispersed by wild boar. Hooked and bristly diaspores, i.e. those adapted to epizoochory, were more frequent; moreover, many species with unspecialised diaspores occurred exclusively near rubbing trees. In contrast to plant species closely tied to forests, species which occur in both forest and open vegetation, as well as non-forest species, were more frequent near rubbing trees than near controls. These findings are consistent with previous studies on diaspore loads in the coats and hooves of shot wild boars. However, our method allows us to identify the transport of diaspores from the open landscape into forest stands, where they might emerge especially after disturbance, and a clustered distribution of epizoochorically dispersed seeds.
Moreover, accumulation of seeds of wetness indicators near rubbing trees demonstrates directed dispersal of plant species inhabiting wet places between remote wallows.
Quantum dots (QDs) are common as luminescent markers for imaging in biological applications because their optical properties seem to be inert against the surrounding solvent. This, together with broad and strong absorption bands and intense, sharp, tuneable luminescence bands, makes them interesting candidates for methods utilizing Förster Resonance Energy Transfer (FRET), e.g. for sensitive homogeneous fluoroimmunoassays (FIA). In this work we demonstrate energy transfer from Eu3+-trisbipyridine (Eu-TBP) donors to CdSe-ZnS-QD acceptors in solutions with and without serum. The QDs are commercially available CdSe-ZnS core-shell particles emitting at 655 nm (QD655). The FRET system was assembled by binding the streptavidin-conjugated donors to the biotin-conjugated acceptors. After excitation of Eu-TBP, and as a result of the energy transfer, the luminescence of the QD655 acceptors also showed lengthened decay times like those of the donors. The energy transfer efficiency, calculated from the decay times of the bound and the unbound components, amounted to 37%. The Förster radius, estimated from the absorption and emission bands, was ca. 77 Å. The effective binding ratio, which depends not only on the ratio of binding pairs but also on unspecific binding, was obtained from the concentration dependence of the donor emission. As serum promotes unspecific binding, the overall FRET efficiency of the assay was reduced. We conclude that QDs are well suited as acceptors in FRET if combined with slow-decay donors like europium. The investigation of the influence of the serum provides guidance towards improving the binding properties of QD assays.
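The two quantities reported above, the transfer efficiency obtained from decay times and the donor-acceptor distance governed by the Förster radius, follow from textbook FRET relations. The sketch below is an illustration only (the function names are invented here, and the 37% efficiency and 77 Å radius are simply reused as example numbers), not code from the original work:

```python
def fret_efficiency_from_lifetimes(tau_bound, tau_unbound):
    """FRET efficiency from donor decay times with (bound) and
    without (unbound) an acceptor nearby: E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_bound / tau_unbound

def donor_acceptor_distance(efficiency, r0):
    """Distance from the standard relation E = 1 / (1 + (r/R0)**6)."""
    return r0 * (1.0 / efficiency - 1.0) ** (1.0 / 6.0)

# Example numbers taken from the abstract: E = 0.37, R0 = 77 Angstrom.
E = fret_efficiency_from_lifetimes(tau_bound=63.0, tau_unbound=100.0)
r = donor_acceptor_distance(E, 77.0)
# r comes out somewhat larger than R0 because E < 0.5.
```

Since E decays with the sixth power of distance, an efficiency of 50% corresponds exactly to r = R0, which makes the Förster radius a convenient ruler for the 1-10 nm range.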
Förster Resonance Energy Transfer (FRET) plays an important role for biochemical applications such as DNA sequencing, intracellular protein-protein interactions, molecular binding studies, in vitro diagnostics and many others. For qualitative and quantitative analysis, FRET systems are usually assembled through molecular recognition of biomolecules conjugated with donor and acceptor luminophores. Lanthanide (Ln) complexes, as well as semiconductor quantum dot nanocrystals (QD), possess unique photophysical properties that make them especially suitable for applied FRET. In this work the possibility of using QD as very efficient FRET acceptors in combination with Ln complexes as donors in biochemical systems is demonstrated. The necessary theoretical and practical background of FRET, Ln complexes, QD and the applied biochemical models is outlined. In addition, scientific as well as commercial applications are presented. FRET can be used to measure structural changes or dynamics at distances ranging from approximately 1 to 10 nm. The very strong and well characterized binding process between streptavidin (Strep) and biotin (Biot) is used as a biomolecular model system. A FRET system is established by Strep conjugation with the Ln complexes and QD biotinylation. Three Ln complexes (one with Tb3+ and two with Eu3+ as central ion) are used as FRET donors. Besides the QD two further acceptors, the luminescent crosslinked protein allophycocyanin (APC) and a commercial fluorescence dye (DY633), are investigated for direct comparison. FRET is demonstrated for all donor-acceptor pairs by acceptor emission sensitization and a more than 1000-fold increase of the luminescence decay time in the case of QD reaching the hundred microsecond regime. Detailed photophysical characterization of donors and acceptors permits analysis of the bioconjugates and calculation of the FRET parameters. 
Extremely large Förster radii of more than 100 Å are achieved for QD as acceptors, considerably larger than for APC and DY633 (ca. 80 and 60 Å). Special attention is paid to interactions with different additives in aqueous solutions, namely borate buffer, bovine serum albumin (BSA), sodium azide and potassium fluoride (KF). A more than 10-fold decrease of the limit of detection (LOD), compared to the extensively characterized and frequently used donor-acceptor pair of Europium tris(bipyridine) (Eu-TBP) and APC, is demonstrated for the FRET system consisting of the Tb complex and QD. A sub-picomolar LOD for QD is achieved with this system in azide-free borate buffer (pH 8.3) containing 2 % BSA and 0.5 M KF. In order to transfer the Strep-Biot model system to a real-life in vitro diagnostic application, two kinds of immunoassays are investigated using human chorionic gonadotropin (HCG) as analyte. HCG itself, as well as two monoclonal anti-HCG mouse-IgG (immunoglobulin G) antibodies, are labeled with the Tb complex and QD, respectively. Although no sufficient evidence for FRET can be found for a sandwich assay, FRET becomes obvious in a direct HCG-IgG assay, showing the feasibility of using the Ln-QD donor-acceptor pair as a highly sensitive analytical tool for in vitro diagnostics.
Semiclassical asymptotics for the scattering amplitude in the presence of focal points at infinity
(2006)
We consider scattering in $\R^n$, $n\ge 2$, described by the Schr\"odinger operator $P(h)=-h^2\Delta+V$, where $V$ is a short-range potential. With the aid of Maslov theory, we give a geometrical formula for the semiclassical asymptotics as $h\to 0$ of the scattering amplitude $f(\omega_-,\omega_+;\lambda,h)$ ($\omega_+\neq\omega_-$) which remains valid in the presence of focal points at infinity (caustics). Crucial for this analysis are precise estimates on the asymptotics of the classical phase trajectories and the relationship between caustics in Euclidean phase space and caustics at infinity.
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents work to fill the gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses particular problems, such as immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are ensured by a virtual machine management framework. This management framework provides the monitoring and administration services necessary to detect and recover from critical failures of virtual machines at run time. 
Considering the risk that virtual machines can be misused to compromise production networks, we present a security management solution that prevents the misuse of laboratory resources through security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute for conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
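The monitoring-and-recovery idea behind the virtual machine management framework can be pictured as a simple watchdog loop: poll every laboratory VM, and trigger recovery for any that fails its health check. The sketch below is a hypothetical illustration (the `watchdog` function and the dict-based VM stand-ins are invented here), not Tele-Lab's actual implementation:

```python
import time

def watchdog(vms, is_healthy, recover, rounds=1, interval=0.0):
    """Poll each virtual machine; recover any that fails the health check.
    Returns the list of VMs that needed recovery."""
    recovered = []
    for _ in range(rounds):
        for vm in vms:
            if not is_healthy(vm):
                recover(vm)
                recovered.append(vm)
        time.sleep(interval)   # pause between monitoring rounds
    return recovered

# Hypothetical usage: two lab VMs, one of which has crashed.
vms = [{"name": "lab-vm-1", "up": True}, {"name": "lab-vm-2", "up": False}]
restarted = watchdog(vms,
                     is_healthy=lambda vm: vm["up"],
                     recover=lambda vm: vm.update(up=True))
```

In a real deployment the health check would probe the guest over the virtual network and recovery would restart or re-clone the VM image, but the control flow is the same.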
E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses the problem of immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. This report introduces the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are ensured by the virtual machine management framework. This management framework provides the monitoring and administration services necessary to detect and recover from critical failures of virtual machines at run time. Considering the risk that virtual machines can be misused to compromise production networks, we present security management solutions that prevent the misuse of laboratory resources through security isolation at the system and network levels. 
This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not meant to substitute for conventional teaching in laboratories but to add practical features to e-learning. This report demonstrates the possibility of implementing hands-on security laboratories on the Internet reliably, securely, and economically.
With the increasing number of applications in Internet and mobile environments, distributed software systems are required to be more powerful and flexible, especially in terms of dynamism and security. This dissertation describes my work on three aspects: dynamic reconfiguration of component software, security control in middleware applications, and dynamic composition of web services. Firstly, I proposed a technology named Routing Based Workflow (RBW) to model the execution and management of collaborative components and to realize temporary binding of component instances. Temporary binding means that component instances are loaded temporarily into a created execution environment to execute their functions, and are then released back to their repository after execution. Temporary binding makes it possible to create an idle execution environment for all collaborative components, on which change operations can be carried out immediately. Changes to the execution environment result in a new collaboration of all involved components and also greatly simplify the classical issues arising from dynamic changes, such as consistency preservation. To demonstrate the feasibility of RBW, I created a dynamic secure middleware system, the Smart Data Server Version 3.0 (SDS3). In SDS3, an open-source implementation of CORBA is adopted and modified as the communication infrastructure, and three secure components managed by RBW are created to enhance the security of access to deployed applications. SDS3 offers multi-level security control over its applications, from strategy control to application-specific detail control. Through management by RBW, the strategy control of SDS3 applications can be changed dynamically by reorganizing the collaboration of the three secure components. In addition, I created the Dynamic Services Composer (DSC) based on the Apache open-source projects Apache Axis and WSIF. 
In DSC, RBW is employed to model the interaction and collaboration of web services and to enable dynamic changes to the flow structure of web services. Finally, overall performance tests were made to evaluate the efficiency of the developed RBW and SDS3. The results demonstrate that temporary binding of component instances has only a slight impact on the execution efficiency of components, and that the blackout time arising from dynamic changes can be drastically reduced.
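The notion of temporary binding, loading a component instance into an execution environment only for the duration of one execution and releasing it afterwards, can be conveyed with a minimal sketch. All names below (`ComponentRepository`, `ExecutionEnvironment`) are invented for illustration and do not reflect the actual SDS3 or DSC code:

```python
class ComponentRepository:
    """Holds reusable component instances between executions."""
    def __init__(self):
        self._components = {}

    def register(self, name, component):
        self._components[name] = component

    def checkout(self, name):
        return self._components[name]

class ExecutionEnvironment:
    """Temporarily binds component instances for one execution run."""
    def __init__(self, repository, workflow):
        self.repository = repository
        self.workflow = workflow   # ordered component names (the "routing")

    def run(self, data):
        for name in self.workflow:
            component = self.repository.checkout(name)  # temporary binding
            data = component(data)                      # execute the function
            # the instance is released back to the repository after the call
        return data

# Hypothetical usage with two trivial components:
repo = ComponentRepository()
repo.register("validate", lambda d: d)
repo.register("double", lambda d: d * 2)
env = ExecutionEnvironment(repo, ["validate", "double"])
result = env.run(21)
```

Because instances are bound only inside `run`, the workflow can be rearranged while the environment is idle, which captures in miniature why RBW simplifies dynamic reconfiguration issues such as consistency preservation.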
Recent work has shown that English-learning 18-month-olds can detect the relationship between discontinuous morphemes such as is and -ing in Grandma is always running (Gomez, 2002; Santelmann & Jusczyk, 1998) but only at a maximum of 3 intervening syllables. In this article we examine the tracking of discontinuous dependencies in children acquiring German. Due to freer word order, German allows for greater distances between dependent elements and a greater syntactic variety of the intervening elements than English does. The aim of this study was to investigate whether factors other than distance may influence the child’s capacity to recognize discontinuous elements. Our findings provide evidence that children’s recognition capacities are affected not only by distance but also by their ability to linguistically analyze the material intervening between the dependent elements. We speculate that this result supports the existence of processing mechanisms that reduce a discontinuous relation to a local one based on subcategorization relations.
Recent research has shown that the early lexical representations children establish in their second year of life already seem to be phonologically detailed enough to allow differentiation from very similar forms. In contrast to these findings, children with specific language impairment show problems in discriminating phonologically similar word forms up to school age. In our study we investigated whether there are differences in the processing of phonological details between normally developing children and children with low language performance in the second year of life. This was done in a retrospective study in which the processing of phonological details was tested in a preferential looking experiment when the children were 19 months old. At the age of 30 months the children were tested with a standardized German test of language comprehension and production (SETK2). The preferential looking data at 19 months revealed opposite reaction patterns for the two groups: while the children scoring normally on the SETK2 increased their fixations of a pictured object only when it was named with the correct word, children with later low language performance did so only when presented with a phonologically slightly deviant mispronunciation. We suggest that this pattern does not point to a specific deficit in processing phonological information in these children but might be related to an instability of early phonological representations and/or a generalized problem of information processing as compared to typically developing children.
We establish a new calculus of pseudodifferential operators on a manifold with smooth edges and study ellipticity with extra trace and potential conditions (as well as Green operators) at the edge. In contrast to the known scenario with conditions of that kind in integral form we admit in this paper ‘singular’ trace, potential and Green operators, which are related to the corresponding operators of positive type in Boutet de Monvel’s calculus for boundary value problems.
The primary objective of this work was to develop a laser source for fundamental investigations in the field of laser-materials interactions. In particular, it is supposed to facilitate the study of the influence of the temporal energy distribution, such as the interaction between adjacent pulses, on ablation processes. Therefore, the aim was to design a laser with a highly flexible and easily controllable temporal energy distribution. The laser to meet these demands is an SBS-laser with optional active mode-locking. The nonlinear reflectivity of the SBS-mirror leads to passive Q-switching and yields ns-pulse bursts with µs spacing. The pulse train parameters, such as pulse duration, pulse spacing, pulse energy and number of pulses within a burst, can be individually adjusted by tuning the pump parameters and the starting conditions for the laser. Another feature of the SBS-reflection is phase conjugation, which leads to an excellent beam quality thanks to the compensation of phase distortions. Transverse fundamental mode operation and a beam quality better than 1.4 times the diffraction limit can be maintained for average output powers of up to 10 W. In addition to the dynamics on a ns-timescale described above, a defined splitting of each ns-pulse into a train of ps-pulses can be achieved by additional active mode-locking. This twofold temporal focussing of the intensity leads to single pulse energies of up to 2 mJ at pulse durations of approximately 400 ps, which corresponds to a pulse peak power of 5 MW. While the pulse duration is of the same order of magnitude as those of other passively Q-switched lasers with simultaneous mode-locking, the pulse energy and pulse peak power exceed the values of these systems found in the literature by an order of magnitude. To the best of my knowledge, the laser presented here is the first implementation of a self-starting mode-locked SBS-laser oscillator. 
In order to gain a better understanding and control of the transient output of the laser two complementary numerical models were developed. The first is based on laser rate equations which are solved for each laser mode individually while the mode-locking dynamics are calculated from the resultant transient spectrum. The rate equations consider the mean photon densities in the resonator, therefore the propagation of the light inside the resonator is not properly displayed. The second model, in contrast, introduces a spatial resolution of the resonator and hence the propagation inside the resonator can more accurately be considered. Consequently, a mismatch between the loss modulation frequency and the resonator round trip time can be conceived. The model calculates all dynamics in the time domain and therefore the spectral influences such as the Stokes-shift have to be neglected. Both models achieve an excellent reproduction of the ns-dynamics that are generated by the SBS-Q-switch. Separately, each model fails to reproduce all aspects of the ps-dynamics of the SBS-laser in detail. This can be attributed to the complexity of the numerous physical processes involved in this system. But thanks to their complementary nature they provide a very useful tool for investigating the various influences on the dynamics of the mode-locked SBS-laser individually. These aspects can eventually be recomposed to give a complete picture of the mechanisms which govern the output dynamics. Among the aspects under scrutiny were in particular the start resonator quality which determines the starting condition for the SBS-Q-switch, the modulation depth of the AOM and the phonon lifetime as well as the Brillouin-frequency of the SBS-medium. 
The numerical simulations and the experiments have opened several doors inviting further investigations and promising a potential for further improvement of the experimental results. The results of the simulations, in combination with the experimental results which determined the starting conditions for the simulations, leave no doubt that the bandwidth generation can primarily be attributed to the SBS-Stokes-shift during the buildup of the Q-switch pulse. For each resonator round trip, bandwidth is generated by shifting a part of the revolving light in frequency. The magnitude of the frequency shift corresponds to the Brillouin-frequency, which is a constant of the SBS material and amounts in the case of SF6 to 240 MHz. The modulation of the AOM merely provides an exchange of population between spectrally adjacent modes and therefore diminishes a modulation in the spectrum. By use of a material with a Brillouin-frequency in the GHz range, the bandwidth generation can be considerably accelerated, thereby shortening the pulse duration. It was also demonstrated that yet another nonlinear effect of the SBS can be exploited: if the phonon lifetime is short compared to the resonator round trip time, we obtain a modulation in the SBS-reflectivity that supports the modulation of the AOM. The application of an external optical feedback by a conventional mirror turns out to be an alternative to the AOM in synchronizing the longitudinal resonator modes. The interesting feature of this system is that, although highly complex in its physical processes and temporal output dynamics, it is very simple and inexpensive from a technical point of view: no expensive modulators and no control electronics are necessary. Finally, the numerical models constitute a powerful tool for the investigation of the emission dynamics of complex laser systems on arbitrary timescales and can also display the spectral evolution of the laser output. 
In particular it could be demonstrated that differences in the results of the complementary models vanish for systems of lesser complexity.
An account is presented of the focus properties, common ground effect and dialogue behaviour of the accented German discourse marker "doch" and the accented sentence negation "nicht". It is argued that "doch" and "nicht" evoke as a focus alternative the logical complement of the proposition expressed by the sentence in which they occur, and that an analysis in terms of contrastive focus accounts for their effect on the common ground and their function in dialogue.
On null quadrature domains
(2006)
The characterization of null quadrature domains in Rn (n ≥ 3) has been an open problem throughout the past two and a half decades. A substantial contribution was made by Friedman and Sakai [10]; they showed that if the complement is bounded, then null quadrature domains are exactly the complements of ellipsoids. The first result with unbounded complements appeared in [15], where it is assumed that the complement is contained in an infinite cylinder. The aim of this paper is to show the relation between null quadrature domains and Newton's theorem on the gravitational force induced by homogeneous homoeoidal ellipsoids. We also make progress on the classification problem and show that if the boundary of a null quadrature domain is contained in a strip and the complement satisfies a certain capacity condition at infinity, then it must be a half-space or the complement of a strip. In addition, we present a Phragmén-Lindelöf type theorem which seems to have been forgotten in the literature.
This article presents an analysis of German nicht...sondern... (contrastive not...but...) which departs from the commonly held view that this construction should be explained by appeal to its alleged corrective function. It is demonstrated that in nicht A sondern B (not A but B), A and B behave just like stand-alone unmarked answers to a common question Q, and that this property of sondern is presuppositional in character. It is shown that many interesting properties of nicht...sondern... follow from this general observation, among them distributional differences between German 'sondern' and German 'aber' (contrastive but, concessive but), intonational requirements, and exhaustivity effects. The presupposition of 'sondern' is furthermore argued to be the result of the conventionalization of conversational implicatures.
On the basis of the Dynamic Syntax framework, this paper argues that the production pressures in dialogue that determine alignment effects and given-versus-new informational effects also drive the shift from case-rich free word order systems without clitic pronouns into systems with clitic pronouns in rigid relative ordering. The paper introduces the assumptions of Dynamic Syntax, in particular the building up of interpretation through structural underspecification and update, sketches the attendant account of production with close coordination of parsing and production strategies, and shows how what was at the Latin stage a purely pragmatic, production-driven decision about linear ordering becomes encoded in the clitics of the Medieval Spanish system, which then, through successive steps of routinization, yields the modern systems with immediately pre-verbal fixed clitic templates.
Irish standard English
(2006)
Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables related to previous and next words. Results are based on fixation durations recorded from 222 persons, each reading 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. Eye movements can help to unravel these processes.
Uncertainties are pervasive in Earth System modelling. This is not just due to a lack of knowledge about physical processes but has its seeds in intrinsic, i.e. inevitable and irreducible, uncertainties concerning the process of modelling itself. Therefore, it is indispensable to quantify uncertainty in order to determine which results are robust under this inherent uncertainty. The central goal of this thesis is to explore how uncertainties map onto the properties of interest, such as the phase space topology and the qualitative dynamics of the system. We address several types of uncertainty and apply methods of dynamical systems theory to a trendsetting field of climate research, the Indian monsoon. For the systematic analysis of the different facets of uncertainty, a box model of the Indian monsoon is investigated, which shows a saddle-node bifurcation with respect to those parameters that influence the heat budget of the system; the bifurcation goes along with a regime shift from a wet to a dry summer monsoon. As some of these parameters are crucially influenced by anthropogenic perturbations, the questions are, firstly, whether the occurrence of this bifurcation is robust against uncertainties in parameters and in the number of considered processes and, secondly, whether the bifurcation can be reached under climate change. Results indicate, for example, the robustness of the bifurcation point against all considered parameter uncertainties. Reaching the critical point under climate change seems rather improbable. A novel method is applied for the analysis of the occurrence and the position of the bifurcation point in the monsoon model under parameter uncertainties. This method combines two standard approaches: a bifurcation analysis with multi-parameter ensemble simulations. 
As a model-independent and therefore universal procedure, this method allows investigating the uncertainty of a bifurcation in a high-dimensional parameter space in many other models. With the monsoon model, the uncertainty about the external influence of the El Niño/Southern Oscillation (ENSO) is determined. There is evidence that ENSO influences the variability of the Indian monsoon, but the underlying physical mechanism is discussed controversially. As a contribution to the debate, three different hypotheses of how ENSO and the Indian summer monsoon are linked are tested. In this thesis, the coupling through the trade winds is identified as the key link between these two major climate constituents. On the basis of this physical mechanism the observed monsoon rainfall data can be reproduced to a great extent. Moreover, this mechanism can be identified in two general circulation models (GCMs) for the present-day situation and for future projections under climate change. Furthermore, uncertainties in the process of coupling models are investigated, with a focus on a comparison of forced dynamics as opposed to fully coupled dynamics. The former describes a particular type of coupling in which the dynamics of one sub-module is substituted by data. Intrinsic uncertainties and constraints are identified that prevent the consistency of a forced model with its fully coupled counterpart. Qualitative discrepancies between the two modelling approaches are highlighted, which lead to an overestimation of predictability and produce artificial predictability in the forced system. The results suggest that bistability and intermittent predictability, when found in a forced model set-up, should always be cross-validated with alternative coupling designs before being taken for granted. All in all, this thesis contributes to the fundamental issue of dealing with the uncertainties the climate modelling community is confronted with. 
While some uncertainties can be incorporated into the interpretation of the model results, intrinsic uncertainties were identified that are inevitable within a given modelling paradigm and are provoked by the specific modelling approach.
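The combination of a bifurcation analysis with multi-parameter ensemble simulations can be illustrated on the saddle-node normal form dx/dt = mu - x², which loses its fixed points at mu = 0 just as the monsoon box model loses its wet-summer state at the critical parameter value. This is a toy sketch under invented parameter values, not the thesis model:

```python
import numpy as np

def fixed_points(mu):
    """Fixed points of the saddle-node normal form dx/dt = mu - x**2
    (they exist only for mu >= 0 and vanish at the bifurcation mu = 0)."""
    if mu < 0:
        return []
    root = np.sqrt(mu)
    return [-root, root]   # unstable and stable branch

def critical_mu(scan, bias):
    """Estimated bifurcation point of dx/dt = (mu + bias) - x**2,
    obtained by scanning mu over a grid (a 1-D bifurcation analysis)."""
    existing = [mu for mu in scan if fixed_points(mu + bias)]
    return min(existing) if existing else None

# Ensemble part: sample an uncertain model bias and record how the
# estimated tipping point moves; the spread measures its robustness.
rng = np.random.default_rng(0)
scan = np.linspace(-1.0, 1.0, 201)
estimates = [critical_mu(scan, d) for d in rng.normal(0.0, 0.05, size=100)]
```

In the thesis the same idea is applied in a high-dimensional parameter space: the bifurcation scan is repeated for each ensemble member, and the distribution of estimated critical points quantifies the robustness of the regime shift.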
Forum: EU Diplomacy in the Year 2020
We describe an experiment to gather original data on geometrical aspects of pointing. In particular, we focus on the concept of the pointing cone, a geometrical model of a pointing gesture's extension. In our setting we employed methodological and technical procedures of a new type to integrate data from annotations as well as from tracker recordings. We combined exact information on position and orientation with raters' classifications. Our first results seem to challenge classical linguistic and philosophical theories of demonstration in that they advise separating pointing from reference.
We consider quasicomplexes of Boutet de Monvel operators in Sobolev spaces on a smooth compact manifold with boundary. To each quasicomplex we associate two complexes of symbols. One complex is defined on the cotangent bundle of the manifold and the other on that of the boundary. The quasicomplex is elliptic if these symbol complexes are exact away from the zero sections. We prove that elliptic quasicomplexes are Fredholm. As a consequence of this result we deduce that a compatibility complex for an overdetermined elliptic boundary problem operator is also Fredholm. Moreover, we introduce the Euler characteristic for elliptic quasicomplexes of Boutet de Monvel operators.
Subject of this work is the possibility of synchronizing nonlinear systems via correlated noise and automatic control. The thesis is divided into two parts. The first part is motivated by field studies on feral sheep populations on two islands of the St. Kilda archipelago, which revealed strong correlations due to environmental noise. For a linear system the population correlation equals the noise correlation (Moran effect), but there exists no systematic examination of the properties of nonlinear maps under the influence of correlated noise. Therefore, in the first part of this thesis the noise-induced correlation of logistic maps is systematically examined. For small noise intensities it can be shown analytically that the correlation of quadratic maps in the fixed-point regime is always smaller than or equal to the noise correlation. In the period-2 regime a Markov model explains the main dynamical characteristics qualitatively. Furthermore, two different mechanisms are introduced which lead to a higher correlation of the systems than the environmental correlation. The new effect of "correlation resonance" is described, i.e. the correlation reaches a maximum depending on the noise intensity. In the second part of the thesis an automatic control method is presented which synchronizes different systems in a robust way. This method is inspired by phase-locked loops and is based on a feedback loop with a differential control scheme, which allows changing the phases of the controlled systems. The effectiveness of the approach is demonstrated for controlled phase synchronization of regular oscillators and food-web models.
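A minimal numerical experiment in the spirit of the first part, two logistic maps in the fixed-point regime driven by correlated Gaussian noise, can be sketched as follows. All parameter values are illustrative; the analytical result quoted above predicts that in this regime the system correlation does not exceed the noise correlation:

```python
import numpy as np

def correlated_noise(n, rho, rng):
    """Two unit-variance Gaussian series with cross-correlation rho,
    built from a shared component plus independent components."""
    common = rng.standard_normal(n)
    e1, e2 = rng.standard_normal(n), rng.standard_normal(n)
    w, v = np.sqrt(rho), np.sqrt(1.0 - rho)
    return w * common + v * e1, w * common + v * e2

def noisy_logistic(a, sigma, noise):
    """Logistic map x -> a*x*(1-x) with additive noise, kept inside (0, 1)."""
    x = np.empty(len(noise))
    x[0] = 0.5
    for t in range(len(noise) - 1):
        x[t + 1] = np.clip(a * x[t] * (1.0 - x[t]) + sigma * noise[t],
                           1e-6, 1.0 - 1e-6)
    return x

rng = np.random.default_rng(1)
n1, n2 = correlated_noise(20000, 0.9, rng)
x1 = noisy_logistic(2.5, 0.01, n1)   # a = 2.5: fixed-point regime (a < 3)
x2 = noisy_logistic(2.5, 0.01, n2)
system_corr = np.corrcoef(x1[1000:], x2[1000:])[0, 1]  # transient discarded
```

Repeating the measurement for varying noise intensity sigma is the kind of scan in which a maximum of the correlation ("correlation resonance") would show up.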
The rigorous development, application and validation of distributed hydrological models requires evaluating data in a spatially distributed way. In particular, spatial model predictions, such as the distribution of soil moisture, runoff-generating areas, nutrient-contributing areas or erosion rates, are to be assessed against spatially distributed observations. Model inputs, such as the distribution of modelling units derived by GIS and remote sensing analyses, should likewise be evaluated against ground-based observations of landscape characteristics. So far, however, quantitative methods of spatial field comparison have rarely been used in hydrology. In this paper, we present algorithms for comparing observed and simulated spatial hydrological data. The methods can be applied to binary and categorical data on regular grids. They comprise cell-by-cell algorithms, cell-neighbourhood approaches that account for fuzziness of location, and multi-scale algorithms that evaluate the similarity of spatial fields with changing resolution. All methods provide a quantitative measure of the similarity of two maps. The comparison methods are applied in two mountainous catchments in southern Germany (Brugga, 40 km<sup>2</sup>) and Austria (Löhnersbach, 16 km<sup>2</sup>). As an example of binary hydrological data, the distribution of saturated areas is analyzed in both catchments. For categorical data, vegetation zones that are associated with different runoff generation mechanisms are analyzed in the Löhnersbach. Mapped spatial patterns are compared to simulated patterns from terrain index calculations and from satellite image analysis. It is discussed how particular features of visual similarity between the spatial fields are captured by the quantitative measures, leading to recommendations on suitable algorithms in the context of evaluating distributed hydrological models.
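The first two families of measures can be sketched minimally for binary maps on a regular grid: plain cell-by-cell agreement, and a neighbourhood-based score that tolerates small location errors. The function names and the toy saturated-area patterns below are my own, not the paper's:

```python
import numpy as np

def cell_by_cell_agreement(a, b):
    """Fraction of grid cells on which two binary maps agree."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return float(np.mean(a == b))

def neighbourhood_agreement(a, b, radius=1):
    """A cell counts as a hit if its value in map a occurs anywhere in the
    corresponding (2*radius+1)^2 neighbourhood of map b, accounting for
    fuzziness of location."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    n, m = a.shape
    hits = 0
    for i in range(n):
        for j in range(m):
            nb = b[max(0, i - radius):i + radius + 1,
                   max(0, j - radius):j + radius + 1]
            hits += bool(np.any(nb == a[i, j]))
    return hits / (n * m)

# A simulated saturated-area patch shifted by one cell against the mapped one:
obs = np.zeros((6, 6), dtype=bool); obs[2:4, 2:4] = True
sim = np.zeros((6, 6), dtype=bool); sim[2:4, 3:5] = True
print(cell_by_cell_agreement(obs, sim))   # penalises the one-cell shift
print(neighbourhood_agreement(obs, sim))  # forgives it at radius 1
```

The neighbourhood score credits near-misses that a strict cell-by-cell comparison counts as errors, which is the motivation for the fuzziness-of-location approaches.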
Global Circulation Models of climate predict not only a change of annual precipitation amounts but also a shift in the daily distribution. To improve the understanding of the importance of daily rain pattern for annual plant communities, which represent a large portion of semi-natural vegetation in the Middle East, I used a detailed, spatially explicit model. The model explicitly considers water storage in the soil and has been parameterized and validated with data collected in field experiments in Israel and data from the literature. I manipulated daily rainfall variability by increasing the mean daily rain intensity on rainy days (MDI, rain volume/day) and decreasing intervals between rainy days while keeping the mean annual amount constant. In factorial combination, I also increased mean annual precipitation (MAP). I considered five climatic regions characterized by 100, 300, 450, 600, and 800 mm MAP. Increasing MDI decreased establishment when MAP was >250 mm but increased establishment at more arid sites. The negative effect of increasing MDI was compensated by increasing mortality with increasing MDI in dry and typical Mediterranean regions (c. 360–720 mm MAP). These effects were strongly tied to water availability in upper and lower soil layers and modified by competition among seedlings and adults. Increasing MAP generally increased water availability, establishment, and density. The orders of magnitude of the MDI and MAP effects partially overlapped, so that their combined effect is important for projections of climate change effects on annual vegetation. The effect size of MAP and MDI followed a sigmoid curve along the MAP gradient, indicating that the semi-arid region (≈300 mm MAP) is the most sensitive to precipitation change with regard to annual communities.
In a recent contribution in Nature (vol. 442, pp. 555-558) Austin & Vivanco showed that sunlight is the dominant factor for decomposition of grass litter in a semi-arid grassland in Argentina. The quantification of this effect was portrayed as a novel finding. I put this result in the context of three other publications from as early as 1980 that quantified photodegradation. My synopsis shows that photodegradation is an important process in semi-arid grasslands in South America, North America and eastern Europe.
This contribution describes a generator of stochastic time series of daily precipitation for the interior of Israel from c. 90 to 900 mm mean annual precipitation (MAP) as a tool for studies of daily rain variability. The probability of rainfall on a given day of the year is described by a regular Gaussian peak curve function. The amount of rain is drawn randomly from an exponential distribution whose mean is the daily mean rain amount (averaged across years for each day of the year) described by a flattened Gaussian peak curve. Parameters for the curves have been calculated from monthly aggregated, long-term rain records from seven meteorological stations. Parameters for arbitrary points on the MAP gradient are calculated from a regression equation with MAP as the only independent variable. The simple structure of the generator allows it to produce time series with daily rain patterns that are projected under climate change scenarios and simultaneously control MAP. Increasing within-year variability of daily precipitation amounts also increases among-year variability of MAP as predicted by global circulation models. Thus, the time series incorporate important characteristics for climate change research and represent a flexible tool for simulations of daily vegetation or surface hydrology dynamics.
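The generator's two building blocks can be sketched compactly: a Gaussian peak curve for the wet-day probability and exponentially distributed amounts whose mean follows a second peak curve. The parameter values below are illustrative placeholders, not the fitted station parameters, and the amount curve is a plain rather than a flattened Gaussian for brevity:

```python
import math
import random

def gaussian_peak(day, peak_day, width, height):
    """Gaussian peak curve over the day of the year."""
    return height * math.exp(-((day - peak_day) ** 2) / (2.0 * width ** 2))

def simulate_year(rng, peak_day=180, p_width=45.0, p_max=0.5,
                  a_width=60.0, a_max=8.0):
    """One stochastic year of daily rain (mm), rainy season centred on peak_day."""
    rain = []
    for day in range(365):
        # Bernoulli draw: is this day rainy?
        if rng.random() < gaussian_peak(day, peak_day, p_width, p_max):
            # amount: exponential with a day-dependent mean
            mean_amount = gaussian_peak(day, peak_day, a_width, a_max)
            rain.append(rng.expovariate(1.0 / mean_amount))
        else:
            rain.append(0.0)
    return rain

rng = random.Random(42)
year = simulate_year(rng)
print(sum(year), sum(d > 0 for d in year))  # annual total (mm), number of wet days
```

Because the annual total is the sum of many exponential draws, widening the daily distribution while holding the means fixed also widens the among-year spread of MAP, which is the coupling between daily and inter-annual variability noted above.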
Germination rates and germination fractions of seeds can be predicted well by the hydrothermal time (HTT) model. Its four parameters, hydrothermal time, minimum soil temperature, minimum soil moisture, and variation of minimum soil moisture, however, must be determined by lengthy germination experiments at combinations of several levels of soil temperature and moisture. For some applications of the HTT model it is more important to have approximate estimates for many species than exact values for only a few species. We suggest that minimum temperature and variation of minimum moisture can be estimated from literature data and expert knowledge. This makes it possible to derive hydrothermal time and minimum moisture from existing data from germination experiments with a single level of temperature and moisture. We applied our approach to a germination experiment comparing germination fractions of wild annual species along an aridity gradient in Israel. Using this simplified approach we estimated hydrothermal time and minimum moisture of 36 species. Comparison with exact data for three species shows that our method is a simple but effective way of obtaining parameters for the HTT model. Hydrothermal time and minimum moisture supposedly indicate climate-related germination strategies. We tested whether these two parameters varied with the climate at the site where the seeds had been collected. We found no consistent variation with climate across species, suggesting that variation is more strongly controlled by site-specific factors.
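The HTT model itself can be stated compactly: a seed with base water potential ψ_b germinates by time t once (ψ − ψ_b)(T − T_b)·t reaches the hydrothermal time θ_HT, with ψ_b normally distributed across the seed lot. A sketch with hypothetical parameter values (not the estimates from this study):

```python
from math import erf, sqrt

def germination_fraction(t, T, psi, theta_HT, T_b, psi_b50, sigma_psi_b):
    """Germination fraction predicted by the hydrothermal time (HTT) model.

    A seed germinates by time t if (psi - psi_b) * (T - T_b) * t >= theta_HT,
    where the base water potential psi_b ~ N(psi_b50, sigma_psi_b**2)."""
    if T <= T_b or t <= 0:
        return 0.0
    # the largest base water potential that still allows germination by time t
    threshold = psi - theta_HT / ((T - T_b) * t)
    z = (threshold - psi_b50) / sigma_psi_b
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF

# Hypothetical seed lot: theta_HT in MPa * degC * d, T in degC, potentials in MPa.
print(germination_fraction(t=5, T=20, psi=-0.1, theta_HT=40,
                           T_b=5, psi_b50=-1.0, sigma_psi_b=0.3))
```

With fixed temperature and moisture, and the two distribution parameters taken from literature or expert knowledge, the remaining two parameters can be fitted to a single observed germination time course, which is the simplification proposed here.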
We present a formal analysis of iconic coverbal gesture. Our model describes the incomplete meaning of a gesture that is derivable from its form, and the pragmatic reasoning that yields a more specific interpretation. Our formalism builds on established models of discourse interpretation to capture key insights from the descriptive literature on gesture: synchronous speech and gesture express a single thought, but while the form of an iconic gesture is an important clue to its interpretation, the content of a gesture can be resolved only by linking it to its context.
Near-infrared (NIR) absorption spectroscopy with tunable diode lasers allows the simultaneous detection of the three most important isotopologues of carbon dioxide (<SUP>12</SUP>CO<SUB>2</SUB>, <SUP>13</SUP>CO<SUB>2</SUB>, <SUP>12</SUP>C<SUP>18</SUP>O<SUP>16</SUP>O) and carbon monoxide (<SUP>12</SUP>CO, <SUP>13</SUP>CO, <SUP>12</SUP>C<SUP>18</SUP>O). The flexible and compact fiber-optic tunable diode laser absorption spectrometer (TDLAS) allows selective measurements of CO<SUB>2</SUB> and CO with high isotopic resolution without sample preparation since there is no interference with water vapour. For each species, linear calibration plots with a dynamic range of four orders of magnitude and detection limits (LOD) in the range of a few ppm were obtained utilizing wavelength modulation spectroscopy (WMS) with balanced detection in a Herriott-type multipass cell. The high performance of the apparatus is illustrated by fill-evacuation-refill cycles.
Since 1971, the Freudenthal Institute has developed an approach to mathematics education named Realistic Mathematics Education (RME). The philosophy of RME is based on Hans Freudenthal’s concept of ‘mathematics as a human activity’. Prof. Hans Freudenthal (1905-1990), a mathematician and educator, believed that ‘ready-made mathematics’ should not be taught in school. Instead, he urged that students should be offered ‘realistic situations’ so that they can rediscover mathematics, moving from informal to formal mathematics. Although mathematics education in Vietnam has some achievements, it still encounters several challenges. Recently, the reform of teaching methods has become an urgent task in Vietnam. It appears that Vietnamese mathematics education lacks the necessary theoretical frameworks. At first sight, the philosophy of RME is suitable for the orientation of the teaching method reform in Vietnam. However, the potential of RME for mathematics education as well as the feasibility of applying RME to teaching mathematics is still an open question in Vietnam. The primary aim of this dissertation is to investigate the possibilities of applying RME to teaching and learning mathematics in Vietnam and to answer the question “how could RME enrich Vietnamese mathematics education?”. This research will emphasize teaching geometry in Vietnamese middle school.
More specifically, the dissertation implements the following research tasks:
• Analyzing the characteristics of Vietnamese mathematics education in the ‘reformed’ period (from the early 1980s to the early 2000s) and at present;
• Conducting a survey of the views of 152 middle school teachers from several Vietnamese provinces and cities on Vietnamese mathematics education;
• Analyzing RME, including Freudenthal’s viewpoints on RME and the characteristics of RME;
• Discussing how to design RME-based lessons and how to apply these lessons to teaching and learning in Vietnam;
• Testing RME-based lessons in a Vietnamese middle school;
• Analyzing the feedback from the students’ worksheets and the teachers’ reports, including the potential of RME-based lessons for Vietnamese middle schools and the difficulties the teachers and their students encountered with RME-based lessons;
• Discussing proposals for applying RME-based lessons to teaching and learning mathematics in Vietnam, including making suggestions for teachers who will apply these lessons to their teaching and designing courses for in-service teachers and teachers in training.
This research reveals that although teachers and students may encounter some obstacles while teaching and learning with RME-based lessons, RME could become a promising approach for mathematics education and could be applied effectively to teaching and learning mathematics in Vietnamese schools.
We consider the problem of testing whether the density of a multivariate random variable can be expressed by a prespecified copula function and the marginal densities. The proposed test procedure is based on the asymptotic normality of the properly standardized integrated squared distance between a multivariate kernel density estimator and an estimator of its expectation under the hypothesis. The test of independence is a special case of this approach.
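For the independence special case, the core of the statistic can be sketched as the integrated squared distance between a bivariate kernel density estimate and the product of the marginal estimates, evaluated on a grid. Bandwidth, grid size, and the omission of the standardization needed for the asymptotic normal test are simplifications of mine:

```python
import numpy as np

def ise_independence(x, y, h=0.3, m=60):
    """Integrated squared distance between the joint Gaussian-kernel density
    estimate and the product of the marginal estimates, on an m x m grid."""
    gx = np.linspace(x.min() - 3 * h, x.max() + 3 * h, m)
    gy = np.linspace(y.min() - 3 * h, y.max() + 3 * h, m)
    norm = h * np.sqrt(2 * np.pi)
    kx = np.exp(-0.5 * ((gx[:, None] - x[None, :]) / h) ** 2) / norm  # (m, n)
    ky = np.exp(-0.5 * ((gy[:, None] - y[None, :]) / h) ** 2) / norm
    joint = (kx @ ky.T) / len(x)                          # f_hat(u, v)
    product = np.outer(kx.mean(axis=1), ky.mean(axis=1))  # f_hat(u) * f_hat(v)
    du, dv = gx[1] - gx[0], gy[1] - gy[0]
    return float(np.sum((joint - product) ** 2) * du * dv)

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y_indep = rng.normal(size=300)
y_dep = x + 0.1 * rng.normal(size=300)
# the statistic is markedly larger for the dependent sample
print(ise_independence(x, y_indep), ise_independence(x, y_dep))
```

Under independence the statistic fluctuates around a small bias term, and after proper standardization it is asymptotically normal, which yields the test.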
The accelerated life time model is considered. First, test procedures for testing the parameter of a parametric acceleration function are investigated; this is done under the assumption of a parametric and of a nonparametric baseline distribution. Further, tests based on nonparametric estimators of regression functions are proposed for checking whether a parametric acceleration function is appropriate for modelling the influence of the covariates. Resampling procedures are discussed for the realization of these methods. Simulations complete the considerations.
In the present work, phenomena in the ionosphere are studied which are connected with earthquakes (16 events) having a depth of less than 50 km and a magnitude M larger than 4. Night-time Es-spread effects are analysed using data of the vertical sounding station Petropavlovsk-Kamchatsky (φ=53.0°, λ=158.7°) from May 2004 until August 2004, registered every 15 minutes. It is found that the maximum distance of the earthquake from the sounding station at which pre-seismic phenomena are still observable depends on the magnitude of the earthquake. Further it is shown that 1-2 days before the earthquakes, in the premidnight hours, the occurrence of Es-spread increases. The statistical reliability of this increase amounts to 0.95.
The statistical analysis of the variations of the daily-mean critical frequency foF2 of the maximum ionospheric electron density is performed in connection with the occurrence of (more than 60) earthquakes with magnitudes M > 6.0, depths h < 80 km and distances from the vertical sounding station R < 1000 km. For the study, data of the Tokyo sounding station are used, which were registered every hour in the years 1957-1990. It is shown that, on average, foF2 decreases before the earthquakes. One day before the shock the decrease amounts to about 5 %. The statistical reliability of this phenomenon is obtained to be better than 0.95. Further, the variations of the occurrence probability of the turbulization of the F-layer (F spread) are investigated for (more than 260) earthquakes with M > 5.5, h < 80 km, R < 1000 km. For this analysis, data of the Japanese station Akita from 1969-1990 are used, which were obtained every hour. It is found that before the earthquakes the occurrence probability of F spread decreases. In the week before the event, the decrease has values of more than 10 %. The statistical reliability of this phenomenon is also larger than 0.95. In examining these seismo-ionospheric effects, only periods of weak heliogeomagnetic disturbance are considered, i.e. the Wolf number is less than 100 and the index ∑Kp is smaller than 30.
Three quantum cryptographic protocols of multiuser quantum networks with embedded authentication, allowing quantum key distribution or quantum direct communication, are discussed in this work. The security of the protocols against different types of attacks is analysed, with a focus on various impersonation attacks and the man-in-the-middle attack. On the basis of the security analyses several improvements are suggested and implemented in order to address the identified vulnerabilities. Furthermore, the impact of the eavesdropping test procedure on impersonation attacks is outlined. The framework of a general eavesdropping test is proposed to provide additional protection against security risks in impersonation attacks.
Two examples of our biophotonic research utilizing nanoparticles are presented, namely laser-based fluoroimmunoassays and in-vivo optical oxygen monitoring. Results of this work include significantly enhanced sensitivity of a homogeneous fluorescence immunoassay and markedly improved spatial resolution of oxygen gradients in root nodules of a legume species.
Demonstratives, in particular gestures that "only" accompany speech, are not a big issue in current theories of grammar. If we deal with gestures, one big problem is fixing their function; the other is how to integrate the representations originating from different channels and, ultimately, how to determine their composite meanings. The growing interest in multi-modal settings, computer simulations, human-machine interfaces and VR applications increases the need for theories of multimodal structures and events. In our workshop contribution we focus on the integration of multimodal contents and investigate different approaches dealing with this problem, such as Johnston et al. (1997) and Johnston (1998), Johnston and Bangalore (2000), Chierchia (1995), Asher (2005), and Rieser (2005).
Classical SDRT (Asher and Lascarides, 2003) discussed essential features of dialogue like adjacency pairs or corrections and updating. Recent work in SDRT (Asher, 2002, 2005) aims at the description of natural dialogue. We use this work to model situated communication, i.e. dialogue in which sub-sentential utterances and gestures (pointing and grasping) are used as conventional modes of communication. We show that in addition to cognitive modelling in SDRT, capturing mental states and speech-act related goals, special postulates are needed to extract meaning out of contexts. Gestural meaning anchors discourse referents in contextually given domains. Both sorts of meaning are fused with the meaning of fragments to arrive at fully developed dialogue moves. This task accomplished, the standard SDRT machinery, tagged SDRSs, rhetorical relations, the update mechanism, and the Maximize Discourse Coherence constraint generate coherent structures. In sum, meanings from different verbal and non-verbal sources are assembled using extended SDRT to form coherent wholes.
What’s in an Irish name?
(2006)
Content: 1. Introduction: The Irish Patronymic System Prior to 1600 2. Anglicisation Pressure 3. Anglicisation: 1600-1900 3.1. Phonetic Approximation 3.2. Simplification 3.3. Translation 3.4. Mistranslation 3.5. Equivalence with Existing English Surname 3.6. Multiplicity of Anglicised Forms 3.7. Anglicisation of Prefixes 4. The Call to De-Anglicise 5. Current Personal Naming Patterns in Ireland 5.1. Current Modern Irish 6. Traditional Naming: “X (Son/Daughter) of Y (Son/Daughter) of Z” 7. Nicknames 8. Conclusion
How does a shared lexicon arise in a population of agents with differing lexicons, and how can this shared lexicon be maintained over multiple generations? In order to gain some insight into these questions we present an ALife model in which the lexicon dynamics of populations that possess and lack metacommunicative interaction (MCI) capabilities are compared. We ran a series of experiments on multi-generational populations whose initial state involved agents possessing distinct lexicons. These experiments reveal some clear differences between the lexicon dynamics of populations that acquire words solely by introspection and those of populations that learn using MCI or using a mixed strategy of introspection and MCI. The lexicon diverges at a faster rate for an introspective population, eventually collapsing to one single form which is associated with all meanings. This contrasts sharply with MCI-capable populations, in which a lexicon is maintained and every meaning is associated with a unique word. We also investigated the effect of increasing the meaning space and showed that it speeds up the lexicon divergence for all populations irrespective of their acquisition method.
We develop an approach to the problem of optimal recovery of continuous linear functionals in Banach spaces through information on a finite number of given functionals. The results obtained are applied to the problem of the best analytic continuation from a finite set in the complex space Cn, n ≥ 1, for classes of entire functions of exponential type which belong to the space Lp, 1 < p < ∞, on the real subspace of Cn. The latter are known as Wiener classes.
We study the Cauchy problem for the oscillation equation of the couple-stress theory of elasticity in a bounded domain in R3. Both the displacement and the stress are given on a part S of the boundary of the domain. This problem is densely solvable, while data of compact support in the interior of S fail to belong to the range of the problem. Hence the problem is ill-posed, which makes the standard calculi of Fourier integral operators inapplicable. If S is real analytic, the Cauchy-Kovalevskaya theorem applies to guarantee the existence of a local solution. We invoke the special structure of the oscillation equation to derive explicit conditions of global solvability and an approximate solution.
What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades now, powerful computers have been capable of simulating the past and future evolution of climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than $\alpha=0.5$ for large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function.
It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between the El Nino/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of Hilbert transformation and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Previously unknown periods of coupling, invisible to linear methods, are also detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system.
Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
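The local-slope diagnostic argued for in the first part can be sketched as follows. This is standard DFA1 with linear detrending; the scale choices and the white-noise example are my own:

```python
import numpy as np

def dfa_fluctuation(x, scales):
    """Detrended Fluctuation Analysis (DFA1): fluctuation function F(s)."""
    y = np.cumsum(x - np.mean(x))  # the profile of the series
    F = []
    for s in scales:
        n_win = len(y) // s
        var = []
        for k in range(n_win):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)  # linear detrending within the window
            var.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(var)))
    return np.array(F)

# Local slopes of log F(s) vs log s: for white noise they stay near 0.5, and
# only an (approximately) constant slope over a wide range of scales would
# justify claiming power-law scaling, i.e. long memory.
rng = np.random.default_rng(2)
x = rng.normal(size=2**14)
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa_fluctuation(x, scales)
slopes = np.diff(np.log(F)) / np.diff(np.log(scales))
print(slopes)
```

Inspecting the local slopes, rather than fitting one global exponent, is precisely the safeguard against spuriously inferring long-range correlations from short-memory data.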
Polyelectrolyte microcapsules containing stimuli-responsive polymers have potential applications in the fields of sensors or actuators, stimulable microcontainers and controlled drug delivery. Such capsules were prepared, with the focus on pH-sensitivity and carbohydrate-sensing. First, pH-responsive polyelectrolyte capsules were produced by means of electrostatic layer-by-layer assembly of oppositely charged weak polyelectrolytes onto colloidal templates that were subsequently removed. The capsules were composed of poly(allylamine hydrochloride) (PAH) and poly(methacrylic acid) (PMA) or poly(4-vinylpyridine) (P4VP) and PMA and varied considerably in their hydrophobicity and the influence of secondary interactions. These polymers were assembled onto CaCO3 and SiO2 particles with diameters of ~ 5 µm, and a new method for the removal of the silica template under mild conditions was proposed. The pH-dependent stability of PAH/PMA and P4VP/PMA capsules was studied by confocal laser scanning microscopy (CLSM). They were stable over a wide pH-range and exhibited a pronounced swelling at the edges of stability, which was attributed to uncompensated positive or negative charges within the multilayers. The swollen state could be stabilized when the electrostatic repulsion was counteracted by hydrogen-bonding, hydrophobic interactions or polymeric entanglement. This stabilization made it possible to reversibly swell and shrink the capsules by tuning the pH of the solution. The pH-dependent ionization degree of PMA was used to modulate the binding of calcium ions. In addition to the pH-sensitivity, the stability and the swelling degree of these capsules at a given pH could be modified, when the ionic strength of the medium was altered. The reversible swelling was accompanied by reversible permeability changes for low and high molecular weight substances. 
The permeability for glucose was evaluated by studying the time-dependence of the buckling of the capsule walls in glucose solutions and the reversible permeability modulation was used for the encapsulation of polymeric material. A theoretical model was proposed to explain the pH-dependent size variations that took into account an osmotic expanding force and an elastic restoring force to evaluate the pH-dependent size changes of weak polyelectrolyte capsules. Second, sugar-sensitive multilayers were assembled using the reversible covalent ester formation between the polysaccharide mannan and phenylboronic acid moieties that were grafted onto poly(acrylic acid) (PAA). The resulting multilayer films were sensitive to several carbohydrates, showing the highest sensitivity to fructose. The response to carbohydrates resulted from the competitive binding of small molecular weight sugars and mannan to the boronic acid groups within the film, and was observed as a fast dissolution of the multilayers, when they were brought into contact with the sugar-containing solution above a critical concentration. It was also possible to prepare carbohydrate-sensitive multilayer capsules, and their sugar-dependent stability was investigated by following the release of encapsulated rhodamine-labeled bovine serum albumin (TRITC-BSA).
Content: 1. Perfect to Preterite? 2. A Past Grammaticalisation Path for Be after V-ing 2.1. Perfect Grams and Sources 2.2. Perfect Distinctions and Perfect-Preterite Evolution 3. Semantic History of Past-Time Be After V-ing 3.1. Perfect Uses, 1670-1800 3.2. Perfect Uses, 1801-2000 4. Temporal Adverbials and Uses of Be After V-ing, 1701-2000 4.1. Hodiernal Uses 4.2. Preterite Uses 4.3. How Far Is It after Coming? 5. Conclusion