Since their discovery in 1610 by Galileo Galilei, Saturn's rings have continued to fascinate both experts and amateurs. Countless icy grains in almost Keplerian orbits reveal a wealth of structures such as ringlets, voids and gaps, wakes and waves, and many more. Grains are found to increase in size with increasing radial distance from Saturn. Recently discovered "propeller" structures in the Cassini spacecraft data provide evidence for the existence of embedded moonlets. In the wake of these findings, the discussion about the origin and evolution of planetary rings, and about growth processes in tidal environments, has resumed. In this thesis, a contact model for binary adhesive, viscoelastic collisions is developed that accounts for agglomeration as well as restitution. Collisional outcomes are crucially determined by the impact speed and the masses of the collision partners, and the model yields a maximal impact velocity at which agglomeration still occurs. Based on the latter, a self-consistent kinetic concept is proposed. The model considers all possible collisional outcomes, namely coagulation, restitution, and fragmentation. Emphasizing the evolution of the mass spectrum and concentrating on coagulation alone, a coagulation equation including a restricted sticking probability is derived. The otherwise phenomenological Smoluchowski equation is reproduced from basic principles and denotes a limit case of the derived coagulation equation. The relevance of adhesion to force-free granular gases and to those under the influence of Keplerian shear is analyzed qualitatively and quantitatively. Capture probability, agglomerate stability, and the evolution of the mass spectrum are investigated in the context of adhesive interactions. A size-dependent radial limit distance from the central planet is obtained, refining the Roche criterion. Furthermore, capture probability in the presence of adhesion is generally different from that in the case of pure gravitational capture. In contrast to a Smoluchowski-type evolution of the mass spectrum, numerical simulations of the obtained coagulation equation revealed that a transition from smaller grains to larger bodies cannot occur via a collisional cascade alone. For the parameters used in this study, effective growth ceases at an average size of centimeters.
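For reference, the classical Smoluchowski coagulation equation recovered here as a limit case has the standard textbook form (the thesis's restricted sticking probability enters as a velocity-dependent factor in the kernel, indicated only schematically below):

```latex
\frac{\partial n(m,t)}{\partial t}
  = \frac{1}{2}\int_0^{m} K(m',\,m-m')\,n(m',t)\,n(m-m',t)\,\mathrm{d}m'
  \;-\; n(m,t)\int_0^{\infty} K(m,\,m')\,n(m',t)\,\mathrm{d}m'
```

Here n(m,t) is the mass distribution and K the coagulation kernel; a restricted sticking probability amounts to replacing K by p_s K, with p_s vanishing above the maximal impact velocity at which agglomeration still occurs.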
In Allefeld & Kurths [2004], we introduced an approach to multivariate phase synchronization analysis in the form of a Synchronization Cluster Analysis (SCA). A statistical model of a synchronization cluster was described, and abbreviated instructions on how to apply this model to empirical data were given, while an implementation of the corresponding algorithm was (and is) available from the authors. In this letter, we fill in the complete details of how the data analysis algorithm is derived from the model.
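The derivation referred to here starts from pairwise phase-synchronization strengths; below is a minimal sketch of that starting point, with an eigenvector-based participation readout in the spirit of SCA (illustrative code, not the authors' published algorithm):

```python
import numpy as np

def sync_matrix(phases):
    """Pairwise phase-synchronization strengths R_ij = |<exp(i(phi_i - phi_j))>|.

    phases: array of shape (n_channels, n_samples) with instantaneous phases,
    e.g. extracted via the Hilbert transform.
    """
    z = np.exp(1j * phases)                       # unit phasors per channel
    R = np.abs(z @ z.conj().T) / phases.shape[1]  # time-averaged coherence
    return R

# Illustrative use: channels 0-2 share a common rhythm, channel 3 does not.
rng = np.random.default_rng(0)
common = 2 * np.pi * 0.1 * np.arange(1000)
phases = np.array([common + 0.3 * rng.standard_normal(1000) for _ in range(3)]
                  + [2 * np.pi * rng.random(1000)])
R = sync_matrix(phases)
# Leading eigenvector of R indicates participation in the dominant cluster.
w, v = np.linalg.eigh(R)
participation = w[-1] * v[:, -1] ** 2
print(participation)  # large for channels 0-2, near zero for channel 3
```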
Biochemical and physiological studies of Arabidopsis thaliana Diacylglycerol Kinase 7 (AtDGK7)
(2006)
A family of diacylglycerol kinases (DGKs) phosphorylates the substrate diacylglycerol (DAG) to generate phosphatidic acid (PA). Both molecules, DAG and PA, are involved in signal transduction pathways. In the model plant Arabidopsis thaliana, seven candidate genes (named AtDGK1 to AtDGK7) code for putative DGK isoforms. Here I report the molecular cloning and characterization of AtDGK7. Biochemical, molecular and physiological analyses of the AtDGK7 gene and its corresponding enzyme are presented. According to Genevestigator data, the AtDGK7 gene is expressed in seedlings and adult Arabidopsis plants, especially in flowers. The AtDGK7 gene encodes the smallest functional DGK predicted in higher plants; it also has an alternative coding sequence containing an extended AtDGK7 open reading frame, confirmed by PCR and submitted to the GenBank database (under the accession number DQ350135). The new cDNA has an extension of 439 nucleotides coding for 118 additional amino acids. The former AtDGK7 enzyme has a predicted molecular mass of ~41 kDa, and its activity is affected by pH and detergents. The DGK inhibitor R59022 also affects AtDGK7 activity, although at higher concentrations (IC50 ~380 µM). The AtDGK7 enzyme shows a Michaelis-Menten type saturation curve for 1,2-DOG. The calculated Km and Vmax were 36 µM 1,2-DOG and 0.18 pmol PA min-1 mg protein-1, respectively, under the assay conditions. The former AtDGK7 protein is able to phosphorylate different DAG analogs that are typically found in plants. The newly deduced AtDGK7 protein harbors the catalytic domain DGKc and the accessory domain DGKa, instead of the truncated form present in the former AtDGK7 protein (Gomez-Merino et al., 2005).
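The saturation behaviour reported above follows the standard Michaelis-Menten rate law; here is a minimal sketch evaluating it with the quoted constants (function name and unit handling are illustrative):

```python
def michaelis_menten(s_uM, km_uM=36.0, vmax=0.18):
    """Reaction velocity v = Vmax * [S] / (Km + [S]).

    s_uM: substrate (1,2-DOG) concentration in microM.
    Returns v in pmol PA per min per mg protein, using the Km and Vmax
    reported for AtDGK7 under the assay conditions.
    """
    return vmax * s_uM / (km_uM + s_uM)

print(michaelis_menten(36.0))   # 0.09   -> half of Vmax at [S] = Km
print(michaelis_menten(360.0))  # ~0.164 -> approaching saturation
```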
In two experiments, many annotators marked antecedents for discourse deixis as unconstrained regions of text. The experiments show that annotators do converge on the identity of these text regions, though much of what they do can be captured by a simple model. Demonstrative pronouns are more likely than definite descriptions to be marked with discourse antecedents. We suggest that our methodology is suitable for the systematic study of discourse deixis.
Nonaqueous synthesis of metal oxide nanoparticles and their assembly into mesoporous materials
(2006)
This thesis mainly consists of two parts: the synthesis of several kinds of technologically interesting crystalline metal oxide nanoparticles via a nonaqueous sol-gel process, and the formation of mesoporous metal oxides using some of these nanoparticles as building blocks via the evaporation-induced self-assembly (EISA) technique. In the first part, the experimental procedures and characterization results of successful syntheses of crystalline tin oxide and tin-doped indium oxide (ITO) nanoparticles are reported. SnO2 nanoparticles exhibit a monodisperse particle size (3.5 nm on average), high crystallinity and particularly high dispersibility in THF, which makes them an ideal particulate precursor for the formation of mesoporous SnO2. ITO nanoparticles possess uniform particle morphology, a narrow particle size distribution (5-10 nm), high crystallinity and high electrical conductivity. The synthesis approaches and characterization of various mesoporous metal oxides, including TiO2, SnO2, a mixture of CeO2 and TiO2, and a mixture of BaTiO3 and SnO2, are reported in the second part of this thesis. Mesoporous TiO2 and SnO2 are presented as highlights of this part. Mesoporous TiO2 was produced both as films and as bulk material. In the case of mesoporous SnO2, the study focused on the high order of the porous structure. All these mesoporous metal oxides show high crystallinity, high surface area and rather monodisperse pore sizes, which demonstrates the validity of the EISA process and the use of preformed crystalline nanoparticles as nanobuilding blocks (NBBs) to produce mesoporous metal oxides.
Quantum dots (QDs) are common as luminescent markers for imaging in biological applications because their optical properties seem to be inert to the surrounding solvent. This, together with broad and strong absorption bands and intense, sharp, tunable luminescence bands, makes them interesting candidates for methods utilizing Förster Resonance Energy Transfer (FRET), e.g. for sensitive homogeneous fluoroimmunoassays (FIA). In this work we demonstrate energy transfer from Eu3+-trisbipyridine (Eu-TBP) donors to CdSe-ZnS QD acceptors in solutions with and without serum. The QDs are commercially available CdSe-ZnS core-shell particles emitting at 655 nm (QD655). The FRET system was created by binding streptavidin-conjugated donors to biotin-conjugated acceptors. After excitation of Eu-TBP, and as a result of the energy transfer, the luminescence of the QD655 acceptors showed lengthened decay times similar to those of the donors. The energy transfer efficiency, calculated from the decay times of the bound and unbound components, amounted to 37%. The Förster radius, estimated from the absorption and emission bands, was ca. 77 Å. The effective binding ratio, which depends not only on the ratio of binding pairs but also on unspecific binding, was obtained from the concentration dependence of the donor emission. As serum promotes unspecific binding, the overall FRET efficiency of the assay was reduced. We conclude that QDs are good acceptors for FRET if combined with slow-decay donors like europium. The investigation of the influence of the serum provides guidance towards improving the binding properties of QD assays.
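The efficiency and distance figures quoted above are linked by the standard FRET relations (textbook formulas, not specific to this assay):

```latex
E = 1 - \frac{\tau_{DA}}{\tau_{D}}, \qquad
E = \frac{R_0^{6}}{R_0^{6} + r^{6}}
```

where τ_DA and τ_D are the donor decay times with and without energy transfer (here, the bound and unbound components), R_0 is the Förster radius and r the donor-acceptor distance.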
Verbal or visual? How information is distributed across speech and gesture in spatial dialog
(2006)
In spatial dialog, such as direction giving, humans make frequent use of speech-accompanying gestures. Some gestures convey largely the same information as speech, while others complement speech. This paper reports a study on how speakers distribute meaning across speech and gesture, and on what factors this depends. Utterance meaning and the wider dialog context were tested by statistically analyzing a corpus of direction-giving dialogs. Problems of speech production (as indicated by discourse markers and disfluencies), communicative goals, and information status were found to be influential, while feedback signals by the addressee were found to have no influence.
We present a new analysis of illocutionary forces in dialogue. We analyze them as complex conversational moves involving two dimensions: what the Speaker commits herself to and what she calls on the Addressee to perform. We start from the analysis of speech acts such as confirmation requests or whimperatives, and extend the analysis to seemingly simple speech acts, such as statements and queries. We then show how to integrate our proposal into the framework of the Grammar for Conversation (Ginzburg, to appear), which is adequate for modelling agents' information states and how they get updated.
In this work, approaches for developing new detection systems for analytical ultracentrifugation (AUC) were explored. Unlike in chromatographic fractionation techniques, multidetection systems for AUC have not yet been implemented to their full extent, despite their potential benefit. In this study we tried to couple to the AUC existing fundamental spectroscopic and scattering techniques that are used in day-to-day science as tools for extracting analyte information. Trials were performed for adapting Raman scattering, light scattering and UV/Vis detection (with the possibility to work with the whole range of wavelengths) to the AUC. Raman and light scattering were concluded to be possible detection systems for the AUC, while the development of a fast fiber-optics-based multiwavelength detector was completed. The multiwavelength detector demonstrated data generation matching literature and reference measurement data, and faster data collection than that of the commercial instrument. With the generation of data in 3-D space in the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information over the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified by the evaluation of data for a mixture of three colloids. These data were in conformity with measurement results from normal radial experiments and showed no significant diffusion broadening. We therefore conclude that with our multiwavelength detector, meaningful data in 3-D space can be collected at a much higher speed of data generation.
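The 3-D data space described above lends itself to simple post-hoc wavelength selection; here is a minimal sketch of the idea, with illustrative array names and shapes rather than the instrument's actual data format:

```python
import numpy as np

# Illustrative AUC multiwavelength dataset: absorbance indexed by
# (scan, radial position, wavelength).
n_scans, n_radii, n_wavelengths = 50, 200, 401
wavelengths = np.linspace(200.0, 600.0, n_wavelengths)  # nm
data = np.random.rand(n_scans, n_radii, n_wavelengths)  # stand-in for measurements

# After the run, pick any wavelength for evaluation -- the whole UV/Vis
# range is available, unlike a single-wavelength commercial scan.
idx = np.argmin(np.abs(wavelengths - 280.0))  # nearest to 280 nm
profile_280nm = data[:, :, idx]               # (scan, radius) sedimentation profiles
print(profile_280nm.shape)                    # (50, 200)
```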
Forum: EU Diplomacy in the Year 2020
Sucrose synthase (Susy) is a key enzyme of sucrose metabolism, catalysing the reversible conversion of sucrose and UDP to UDP-glucose and fructose. Therefore, its activity, localization and function have been studied in various plant species. It has been shown that Susy can play a role in supplying energy to companion cells for phloem loading (Fu and Park, 1995), provide substrates for starch synthesis (Zrenner et al., 1995), and supply UDP-glucose for cell wall synthesis (Haigler et al., 2001). Analysis of the Arabidopsis genome identified six Susy isoforms. The expression of these isoforms was investigated using promoter-reporter gene constructs (GUS) and real-time RT-PCR. Although these isoforms are closely related at the protein level, they have radically different spatial and temporal patterns of expression in the plant, with no two isoforms showing the same distribution. More than one isoform is expressed in all organs examined. Some have high but specific expression in particular organs or developmental stages, whilst others are constantly expressed throughout the whole plant and across various stages of development. The in planta functions of the six Susy isoforms were explored through analysis of T-DNA insertion mutants and RNAi lines. Plants without expression of individual isoforms show no differences in growth and development, and are not significantly different from wild-type plants in soluble sugar, starch and cellulose contents under all growth conditions investigated. Analysis of a T-DNA insertion mutant lacking the Sus3 isoform, which was exclusively expressed in stomatal cells, revealed only a minor influence on guard cell osmoregulation and/or bioenergetics. Although none of the sucrose synthases appear to be essential for normal growth under our standard growth conditions, they may be necessary for growth under stress conditions. Different isoforms of sucrose synthase respond differently to various abiotic stresses. It has been shown that oxygen deprivation up-regulates Sus1 and Sus4 and increases total Susy activity. However, analysis of plants with reduced expression of both Sus1 and Sus4 revealed no obvious effects on plant performance under oxygen deprivation. Low temperature up-regulates Sus1 expression, but the loss of this isoform has no effect on the freezing tolerance of non-acclimated and cold-acclimated plants. These data provide a comprehensive overview of the expression of this gene family, which supports some of the previously reported roles for Susy and indicates the involvement of specific isoforms in metabolism and/or signalling.
G protein-coupled receptor (GPCR) genes form large gene families in every animal, sometimes making up 1-2% of the animal's genome. Of all insect GPCRs, the neurohormone (neuropeptide, protein hormone, biogenic amine) GPCRs are especially important, because they, together with their ligands, occupy a high hierarchic position in the physiology of insects and steer crucial processes such as development, reproduction, and behavior. In this paper, we review our current knowledge of Drosophila melanogaster GPCRs and use this information to annotate the neurohormone GPCR genes present in the recently sequenced genome of the honey bee Apis mellifera. We found 35 neuropeptide receptor genes in the honey bee (44 in Drosophila) and two genes coding for leucine-rich-repeats-containing protein hormone GPCRs (4 in Drosophila). In addition, the honey bee has 19 biogenic amine receptor genes (21 in Drosophila). The larger numbers of neurohormone receptors in Drosophila are probably due to gene duplications that occurred during the recent evolution of the fly. Our analyses also yielded the likely ligands for 40 of the 56 honey bee neurohormone GPCRs identified in this study. In addition, we made some interesting observations on neurohormone GPCR evolution and on the evolution and co-evolution of their ligands. For neuropeptide and protein hormone GPCRs, there appears to be a general co-evolution between receptors and their ligands. This is in contrast to biogenic amine GPCRs, where evolutionarily unrelated GPCRs often bind the same biogenic amine, suggesting frequent ligand exchanges ("ligand hops") during GPCR evolution.
A casual look at regional unemployment rates reveals vast differences that cannot be explained by different institutional settings. Our paper attempts to trace these differences in labor market performance back to the regions' specialization in products that are more or less advanced in their product cycle. The model we develop shows how individual profit and utility maximization endogenously yields higher employment levels at the beginning of the cycle. In later phases, however, employment decreases in the presence of process innovation. Our model suggests that the only way to escape this vicious circle is to specialize in products that are at the beginning of their "economic life". The model is based on an interaction of demand- and supply-side forces.
We prove the existence of a class of local in time solutions, including static solutions, of the Einstein-Euler system. This result is the relativistic generalisation of a similar result for the Euler-Poisson system obtained by Gamblin [8]. As in his case the initial data of the density do not have compact support but fall off at infinity in an appropriate manner. An essential tool in our approach is the construction and use of weighted Sobolev spaces of fractional order. Moreover, these new spaces allow us to improve the regularity conditions for the solutions of evolution equations. The details of this construction, the properties of these spaces and results on elliptic and hyperbolic equations will be presented in a forthcoming article.
Since 2002, keywords like service-oriented engineering, service-oriented computing, and service-oriented architecture have been widely used in research, education, and enterprises. These and related terms are often misunderstood or used incorrectly. Correcting these misunderstandings requires a deeper knowledge of the concepts and their historical background, together with an overview of service-oriented architectures; this paper provides all three.
The main claim of this paper is that the minimalist framework and optimality theory adopt more or less the same architecture of grammar: both assume that a generator defines a set S of potentially well-formed expressions that can be generated on the basis of a given input, and that an evaluator selects from S the expressions that are actually grammatical in a given language L. The paper therefore proposes a model of grammar in which the strengths of the two frameworks are combined: more specifically, it is argued that the computational system of human language (CHL) of the MP creates a set S of potentially well-formed expressions, which are subsequently evaluated in an optimality-theoretic fashion.
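The shared generator-evaluator architecture can be made concrete with a toy optimality-theoretic evaluation, where candidates are compared lexicographically by their violation profiles under a constraint ranking; this is a schematic illustration, not a claim about either framework's actual machinery:

```python
def ot_evaluate(candidates, constraints):
    """Select the optimal candidate(s) under a ranked list of constraints.

    candidates: iterable of expressions produced by the generator (set S).
    constraints: list of functions, highest-ranked first; each returns the
    number of violations a candidate incurs.
    """
    def profile(cand):
        return tuple(c(cand) for c in constraints)
    best = min(profile(c) for c in candidates)
    return [c for c in candidates if profile(c) == best]

# Toy example: generator output for some input, two dummy constraints
# ranked C1 >> C2 (both entirely illustrative).
candidates = ["ta", "tas", "sta"]
c1 = lambda cand: int(cand.startswith("st"))  # *ComplexOnset proxy
c2 = lambda cand: 3 - len(cand)               # Max-IO proxy
print(ot_evaluate(candidates, [c1, c2]))      # -> ['tas']
```

Tuple comparison in `min` implements strict constraint ranking: a single violation of a higher-ranked constraint outweighs any number of violations of lower-ranked ones.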
An increasing number of applications requires user interfaces that facilitate the handling of large geodata sets. Using virtual 3D city models, complex geospatial information can be communicated visually in an intuitive way. Therefore, real-time visualization of virtual 3D city models represents a key functionality for interactive exploration, presentation, analysis, and manipulation of geospatial data. This thesis concentrates on the development and implementation of concepts and techniques for real-time city model visualization. It discusses rendering algorithms as well as complementary modeling concepts and interaction techniques. In particular, the work introduces a new real-time rendering technique to handle city models of high complexity concerning texture size and number of textures. Such models are difficult to handle with current technology, primarily due to two problems:
- Limited texture memory: The amount of simultaneously usable texture data is limited by the memory of the graphics hardware.
- Limited number of textures: Using several thousand different textures simultaneously causes significant performance problems due to texture switch operations during rendering.
The multiresolution texture atlases approach, introduced in this thesis, overcomes both problems. During rendering, it permanently maintains a small set of textures that is sufficient for the current view and the available screen resolution. The efficiency of multiresolution texture atlases is evaluated in performance tests. In summary, the results demonstrate that the following goals have been achieved:
- Real-time rendering becomes possible for 3D scenes whose amount of texture data exceeds the main memory capacity.
- Overhead due to texture switches is kept permanently low, so that the number of different textures has no significant effect on the rendering frame rate.
Furthermore, this thesis introduces two new approaches for real-time city model visualization that use textures as core visualization elements:
- An approach for the visualization of thematic information.
- An approach for the illustrative visualization of 3D city models.
Both techniques demonstrate that multiresolution texture atlases provide a basic functionality for the development of new applications and systems in the domain of city model visualization.
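The working-set idea behind multiresolution texture atlases can be illustrated by the view-dependent choice of texture resolution that drives it; the following is a schematic sketch of the principle, not the thesis's actual algorithm:

```python
import math

def required_mip_level(texture_size_px, projected_size_px):
    """Coarsest mip level whose resolution still matches the screen footprint.

    A texture shown at half its native size can drop one mip level, at a
    quarter two levels, etc., so resident texture memory tracks the current
    view rather than the full model.
    """
    if projected_size_px <= 0:
        return int(math.log2(texture_size_px))  # off-screen: coarsest level
    ratio = texture_size_px / projected_size_px
    return max(0, int(math.log2(ratio))) if ratio > 1 else 0

# A 2048x2048 facade texture seen at various distances:
for projected in (2048, 512, 64):
    level = required_mip_level(2048, projected)
    resident = 2048 >> level
    print(f"projected {projected:4d}px -> mip {level}, keep {resident}x{resident}")
```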
Our work goes in two directions. First, we want to transfer definitions, concepts and results of the theory of hyperidentities and solid varieties from the total to the partial case.
(1) We prove that the operators χ^A_RNF and χ^E_RNF are only monotone and additive, and we show that the sets of all fixed points of these operators are characterized by only three, instead of four, equivalent conditions for the case of closure operators.
(2) We prove that V is n-SF-solid iff clone^SF(V) is free with respect to itself, freely generated by the independent set {[f_i(x_1, ..., x_n)]_(Id^SF_n V) | i ∈ I}.
(3) We prove that if V is n-fluid and ~_V|_P(V) = ~_V^(-iso)|_P(V), then V is k-unsolid for k ≥ n (where P(V) is the set of all V-proper hypersubstitutions of type τ).
(4) We prove that a strong M-hyperquasi-equational theory is characterized by four equivalent conditions.
The second direction of our work is to follow ideas which are typical for the partial case.
(1) We characterize all minimal partial clones which are strongly solidifyable.
(2) We define the operator χ^A_Ph, where Ph is a monoid of regular partial hypersubstitutions. Using this concept, we define the concept of a PHyp_R(τ)-solid strong regular variety of partial algebras, and we prove that a PHyp_R(τ)-solid strong regular variety satisfies four equivalent conditions.
This study introduces a method for multiparallel analysis of small organic compounds in the unicellular green alga Chlamydomonas reinhardtii, one of the premier model organisms in cell biology. The comprehensive study of changes in metabolite composition, or metabolomics, in response to environmental, genetic or developmental signals is an important complement to other functional genomics techniques in the effort to understand how genes, proteins and metabolites are integrated into a seamless and dynamic network that sustains cellular functions. The sample preparation protocol was optimized to quickly inactivate enzymatic activity, achieve maximum extraction capacity and process large sample quantities. As a result of the rapid sampling, extraction and analysis by gas chromatography coupled to time-of-flight mass spectrometry (GC-TOF), more than 800 analytes can be measured from a single sample, of which over 100 could be positively identified. As part of the analysis of GC-TOF raw data, aliquot ratio analysis for the systematic removal of artifact signals and tools for applying principal component analysis (PCA) to metabolomic datasets are proposed. Cells subjected to nitrogen (N), phosphorus (P), sulfur (S) or iron (Fe) depleted growth conditions develop highly distinctive metabolite profiles, with metabolites implicated in many different processes being affected in their concentration during adaptation to nutrient deprivation. Metabolite profiling allowed characterization of both specific and general responses to nutrient deprivation at the metabolite level. Modulation of the substrates for N-assimilation and the oxidative pentose phosphate pathway indicated a priority for maintaining the capability for immediate activation of N assimilation even under conditions of decreased metabolic activity and arrested growth, while the rise in 4-hydroxyproline in S-deprived cells could be related to enhanced degradation of cell wall proteins. The adaptation to sulfur deficiency was analyzed with greater temporal resolution, and responses of wild-type cells were compared with mutant cells deficient in SAC1, an important regulator of the sulfur deficiency response. Whereas concurrent metabolite depletion and accumulation occur during adaptation to S deprivation in wild-type cells, the sac1 mutant strain is characterized by a massive inability to sustain many processes that normally lead to transient or permanent accumulation of certain metabolites, or to recovery of metabolite levels after initial down-regulation. For most of the steps in arginine biosynthesis in Chlamydomonas, mutants have been isolated that are deficient in the respective enzyme activities. Three strains deficient in the activities of N-acetylglutamate-5-phosphate reductase (arg1), N2-acetylornithine aminotransferase (arg9), and argininosuccinate lyase (arg2), respectively, were analyzed with regard to activation of endogenous arginine biosynthesis after withdrawal of externally supplied arginine. Enzymatic blocks in the arginine biosynthetic pathway could be characterized by precursor accumulation, like the amassment of argininosuccinate in arg2 cells, and by depletion of intermediates occurring downstream of the enzymatic block, e.g. N2-acetylornithine, ornithine, and argininosuccinate depletion in arg9 cells.
The unexpected finding of substantial levels of the arginine pathway intermediates N-acetylornithine, citrulline, and argininosuccinate downstream of the enzymatic block in arg1 cells provided an explanation for the residual growth capacity of these cells in the absence of external arginine sources. The presence of these compounds, together with the unusual accumulation in arg1 cells of N-acetylglutamate, the first intermediate that commits the glutamate backbone to ornithine and arginine biosynthesis, suggests that alternative pathways, possibly involving the activity of ornithine aminotransferase, may be active when the default reaction sequence producing ornithine via acetylation of glutamate is disabled.
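PCA on metabolite profiles of the kind proposed above is routinely done as follows; this is a minimal sketch with illustrative surrogate data, not the thesis's actual pipeline or preprocessing:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative matrix: rows = samples (e.g. N-, P-, S-, Fe-depleted cultures
# and controls), columns = metabolite peak intensities from GC-TOF.
rng = np.random.default_rng(1)
X = rng.lognormal(mean=2.0, sigma=0.5, size=(24, 120))

# Standardize metabolites, then project onto the leading components;
# distinct nutrient regimes should separate in the resulting score plot.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores.shape)  # (24, 2) -- coordinates for a 2-D score plot
```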
In view of the importance of charge storage in polymer electrets for electromechanical transducer applications, the aim of this work is to contribute to the understanding of the charge-retention mechanisms. Furthermore, we try to explain how the long-term storage of charge carriers in polymeric electrets works and to identify the probable trap sites. Charge trapping and de-trapping processes were investigated in order to obtain evidence of the trap sites in polymeric electrets. The charge de-trapping behavior of two particular polymer electrets was studied by means of thermal and optical techniques. In order to obtain evidence of trapping or de-trapping, charge and dipole profiles in the thickness direction were also monitored. In this work, the study was performed on polyethylene terephthalate (PETP) and on cyclic-olefin copolymers (COCs). PETP is a photo-electret and contains a net dipole moment located in the carbonyl group (C=O). The electret behavior of PETP arises from both dipole orientation and charge storage. In contrast to PETP, COCs are not photo-electrets and do not exhibit a net dipole moment; their electret behavior arises from the storage of charges only. COC samples were doped with dyes in order to probe their internal electric field. COCs show shallow charge traps at 0.6 and 0.11 eV, characteristic of thermally activated processes. In addition, deep charge traps are present at 4 eV, characteristic of optically stimulated processes. PETP films exhibit a photo-current transient with a maximum that depends on the temperature, with an activation energy of 0.106 eV. The pair thermalization length (r_c) calculated from this activation energy for photo-carrier generation in PETP was estimated to be approx. 4.5 nm. The generated photo-charge carriers can recombine, interact with the trapped charge, escape through the electrodes or occupy an empty trap. PETP possesses a small quasi-static pyroelectric coefficient (QPC): ~0.6 nC/(m²K) for unpoled samples, ~60 nC/(m²K) for poled samples and ~60 nC/(m²K) for unpoled samples under an electric bias (E ~10 V/µm). When stored charges generate an internal electric field of approx. 10 V/µm, they are able to induce a QPC comparable to that of the oriented dipoles. Moreover, we observe charge-dipole interaction. Since the raw data of the QPC experiments on PETP samples are noisy, a numerical Fourier-filtering procedure was applied. Simulations show that the data analysis is reliable for noise levels up to 3 times larger than the calculated pyroelectric current for the QPC. PETP films revealed shallow traps at approx. 0.36 eV in thermally stimulated current measurements. These trap energies are associated with molecular dipole relaxations (C=O). On the other hand, photo-activated measurements yield deep charge traps at 4.1 and 5.2 eV. The observed wavelengths belong to transitions in PETP that are analogous to the π-π* benzene transitions. The observed charge de-trapping selectivity in the photo-charge decay indicates that the charge de-trapping results from a direct photon-charge interaction. Additionally, the charge de-trapping can be facilitated by photo-exciton generation and the interaction of the photo-excitons with trapped charge carriers. These results indicate that the benzene rings (C6H4) and the dipolar groups (C=O) can stabilize and share an extra charge carrier in a chemical resonance.
In this way, this charge can be de-trapped in connection with the photo-transitions of the benzene ring and with the dipole relaxations. The thermally activated charge release shows a trap depth different from its optical counterpart. This difference indicates that the trap levels depend on the de-trapping process and on the chemical nature of the trap site. That is, charge de-trapping from shallow traps is related to secondary forces, whereas charge de-trapping from deep traps is related to primary forces. Furthermore, the presence of deep trap levels accounts for the stability of the charge over long periods of time.
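The Fourier-filtering step mentioned above can be sketched as a narrow band-pass around the known modulation frequency of the pyroelectric experiment; the parameters below are illustrative, not the study's actual settings:

```python
import numpy as np

fs, f_mod, n = 1000.0, 5.0, 4096   # sampling rate (Hz), modulation (Hz), samples
t = np.arange(n) / fs
signal = 0.2 * np.sin(2 * np.pi * f_mod * t)  # stand-in pyroelectric response
noisy = signal + 0.6 * np.random.default_rng(2).standard_normal(n)

# Keep only a narrow band around the modulation frequency, then invert.
spectrum = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spectrum[np.abs(freqs - f_mod) > 0.5] = 0.0
filtered = np.fft.irfft(spectrum, n)

print(np.std(noisy - signal), np.std(filtered - signal))  # noise before/after
```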
We analyze the notions of monotonicity and complete monotonicity for Markov chains in continuous time, taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
It is shown that an elliptic scattering operator A on a compact manifold with boundary, with operator-valued coefficients in the morphisms of a bundle of Banach spaces of class (HT) with Pisier's property (α), has maximal regularity (up to a spectral shift), provided that the spectrum of the principal symbol of A on the scattering cotangent bundle avoids the right half-plane. This is accomplished by representing the resolvent in terms of pseudodifferential operators with R-bounded symbols, yielding, by an iteration argument, the R-boundedness of λ(A − λ)^(−1) in Re λ ≥ τ for some τ ∈ ℝ. To this end, elements of a symbolic and operator calculus of pseudodifferential operators with R-bounded symbols are introduced. The significance of this method for proving maximal regularity results for partial differential operators is underscored by considering also a more elementary situation of anisotropic elliptic operators on ℝ^d with operator-valued coefficients.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
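The CRQA measure used here rests on a cross recurrence matrix between the two interlocutors' gaze series; below is a minimal sketch of that construction with an illustrative encoding and threshold, not Richardson and Dale's full analysis:

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Binary cross recurrence matrix: CR[i, j] = 1 iff |x_i - y_j| <= radius.

    x, y: 1-D state series (e.g. numerically coded fixation targets of the
    speaker and the listener); the recurrence rate along diagonals
    quantifies gaze coupling at the corresponding lag.
    """
    return (np.abs(x[:, None] - y[None, :]) <= radius).astype(int)

rng = np.random.default_rng(3)
speaker = rng.integers(0, 6, size=500)   # fixated object IDs over time
listener = np.roll(speaker, 30)          # same scan path, ~30 steps later
cr = cross_recurrence(speaker, listener, radius=0)
rate_by_lag = [np.mean(np.diagonal(cr, offset=k)) for k in range(-60, 61)]
print(int(np.argmax(rate_by_lag)) - 60)  # -> 30: listener echoes speaker's gaze
```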
Workshop of the Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, 9-10 February 2006
When Galactic microlensing events of stars are observed, one usually measures a symmetric light curve corresponding to a single lens, or an asymmetric light curve, often with caustic crossings, in the case of a binary lens system. In principle, the fraction of binary stars in a certain separation range can be estimated from the number of measured microlensing events. However, a binary system may produce a light curve that can be fitted well as a single-lens light curve, in particular if the data sampling is poor and the error bars are large. We investigate what fraction of microlensing events produced by binary stars at different separations may be well fitted by, and hence misinterpreted as, single-lens events under various observational conditions. We find that this fraction strongly depends on the separation of the binary components, reaching its minimum between 0.6 and 1.0 Einstein radii, where it is still of the order of 5%. The Einstein radius corresponds to a few AU for typical Galactic microlensing scenarios. The rate of misinterpretation is higher for short microlensing events lasting up to a few months and for events with smaller maximum amplification. For fixed separation, it increases for binaries with more extreme mass ratios. The problem of degeneracy in the photometric light curve solution between binary-lens and binary-source microlensing events was studied on simulated data and on data observed by the PLANET collaboration. The fitting code BISCO, using the PIKAIA genetic-algorithm optimization routine, was written for optimizing binary-source microlensing light curves observed at different sites in the I, R and V photometric bands. Tests on simulated microlensing light curves show that BISCO succeeds in finding the solution to a binary-source event in a very wide parameter space. A flux-ratio method is suggested in this work for breaking the degeneracy between binary-lens and binary-source photometric light curves. Models show that only a few additional data points in the photometric V band, together with a full light curve in the I band, will enable breaking the degeneracy. Very good data quality and dense data sampling, combined with accurate binary-lens and binary-source modeling, yielded the discovery of the lowest-mass planet discovered outside the Solar System so far, OGLE-2005-BLG-390Lb, with only 5.5 Earth masses. This was the first observed microlensing event in which the degeneracy between a planetary binary-lens and an extreme-flux-ratio binary-source model was successfully broken. For the events OGLE-2003-BLG-222 and OGLE-2004-BLG-347, the degeneracy was encountered despite very dense data sampling. From light curve modeling and stellar evolution theory, there was a slight preference for explaining OGLE-2003-BLG-222 as a binary-source event and OGLE-2004-BLG-347 as a binary-lens event. However, without spectra, this degeneracy cannot be fully broken. No planet has been found so far around a white dwarf, though it is believed that Jovian planets should survive the late stages of stellar evolution and that white dwarfs will retain planetary systems in wide orbits. We want to perform high-precision astrometric observations of nearby white dwarfs in wide binary systems with red dwarfs in order to find planets around white dwarfs.
We selected a sample of observing targets (WD-RD binary systems, not yet published) that may have planets around the WD component, and modeled synthetic astrometric orbits that could be observed for these targets using existing and future astrometric facilities. Modeling was performed for astrometric accuracies of 0.01, 0.1 and 1.0 mas, separations between WD and planet of 3 and 5 AU, a binary-system separation of 30 AU, planet masses of 10 Earth masses and 1 and 10 Jupiter masses, WD masses of 0.5 and 1.0 solar masses, and distances to the system of 10, 20 and 30 pc. It was found that the PRIMA facility at the VLTI, once it is operating, will be able to detect planets around white dwarfs down to 1 Jupiter mass by measuring the astrometric wobble of the WD due to a planetary companion. We show for the simulated observations that it is possible to model the orbits and find the parameters describing the potential planetary systems.
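For reference, the symmetric single-lens light curve against which the binary fits above are compared is the standard Paczyński magnification (a textbook formula, not specific to this thesis):

```latex
A(u) = \frac{u^2 + 2}{u\sqrt{u^2 + 4}}, \qquad
u(t) = \sqrt{u_0^2 + \left(\frac{t - t_0}{t_E}\right)^2}
```

where u is the lens-source separation in units of the Einstein radius, u_0 the impact parameter, t_0 the time of closest approach, and t_E the Einstein-radius crossing time.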
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data like tree rings or sediment and ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable approaches of statistical modelling as well as methods of time series analysis are necessary, which are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace element abundances and grain-size distributions obtained near Cape Roberts (Eastern Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present, as well as with regional deglaciation events. Grain-size distributions in sediments give information about the predominance of different transportation and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately. In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. The relationship with the mutual component overlap, as well as with the information missing due to grouping and truncation of the measured data, is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals certain problems accompanying the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is obviously related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
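A finite mixture fit to a grain-size distribution of the kind discussed above can be sketched with a standard two-component EM estimate; the data, component count and library choice here are illustrative, not the study's estimator or its uncertainty analysis:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative sample: log grain sizes from two overlapping transport modes.
rng = np.random.default_rng(4)
log_size = np.concatenate([rng.normal(-1.0, 0.4, 600),   # fine fraction
                           rng.normal(1.5, 0.6, 400)])   # coarse fraction

gmm = GaussianMixture(n_components=2, random_state=0).fit(log_size[:, None])
print(gmm.weights_)        # mixing proportions of the two modes
print(gmm.means_.ravel())  # mode positions on the log-size axis
```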
The goal of a Brain-Computer Interface (BCI) is the development of a unidirectional interface between a human and a computer to allow control of a device via brain signals alone. While the BCI systems of almost all other groups require the user to be trained over several weeks or even months, the group of Prof. Dr. Klaus-Robert Müller in Berlin and Potsdam, to which I belong, was one of the first research groups in this field to use machine learning techniques on a large scale. The adaptivity of the processing system to the individual brain patterns of the subject confers huge advantages for the user. Thus BCI research is considered a hot topic in machine learning and computer science. It requires interdisciplinary cooperation between disparate fields such as neuroscience, since only by combining machine learning and signal processing techniques based on neurophysiological knowledge will the largest progress be made. In this work I deal with my part of this project, which lies mainly in the area of computer science. I have considered the following three main points:
Establishing a performance measure based on information theory: I have critically examined the assumptions of Shannon's information transfer rate for application in a BCI context. By establishing suitable coding strategies I was able to show that this theoretical measure approximates quite well what is practically achievable.
Transfer and development of suitable signal processing and machine learning techniques: One substantial component of my work was to develop several machine learning and signal processing algorithms to improve the efficiency of a BCI. Based on the neurophysiological knowledge that several independent EEG features can be observed for some mental states, I have developed a method for combining different and possibly independent features, which improved performance. In some cases the combination algorithm outperforms the best single feature by more than 50%. Furthermore, through the development of suitable algorithms, I have addressed theoretically and practically the question of the optimal number of classes for a BCI. It transpired that, with the BCI performances reported so far, three or four different mental states are optimal. For another extension I have combined ideas from signal processing with those of machine learning, since a high gain can be achieved if the temporal filtering, i.e., the choice of frequency bands, is automatically adapted to each subject individually.
Implementation of the Berlin Brain-Computer Interface and realization of suitable experiments: Finally, a further substantial component of my work was to realize an online BCI system which includes the developed methods but is also flexible enough to allow the simple realization of new algorithms and ideas. So far, bitrates of up to 40 bits per minute have been achieved with this system by completely untrained users, which, compared to the results of other groups, is highly successful.
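The class-number result above can be made concrete with the Wolpaw-style information transfer rate that is standard in BCI work (whether it coincides with the thesis's refined measure is not claimed here); the accuracy values below are illustrative assumptions:

```python
import math

def itr_bits_per_trial(n_classes, accuracy):
    """Information transfer rate per selection (Wolpaw et al. formulation).

    Assumes equiprobable classes and uniformly distributed errors.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0
    if p == 1.0:
        return math.log2(n)
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Accuracy typically drops as classes are added; with these illustrative
# values, the per-trial rate peaks around 3-4 classes, as reported above.
assumed_accuracy = {2: 0.90, 3: 0.82, 4: 0.75, 5: 0.68, 6: 0.62}
for n, p in assumed_accuracy.items():
    print(n, round(itr_bits_per_trial(n, p), 3))
```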
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem-solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and corpus annotation which shows that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models, and addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can be used as a foundation for improving models of tutoring dialogue.
When top sports performers fail or "choke" under pressure, everyone asks: why? Research has identified a number of conditions (e.g. an audience) that elicit choking, and factors (e.g. trait anxiety) that moderate the pressure-performance relation. Furthermore, mediating processes have been investigated. For example, explicit monitoring theories link performance failure under psychological stress to an increase in attention paid to a skill and its step-by-step execution (Beilock & Carr, 2001). Many studies have provided support for these ideas. However, so far only overt performance measures have been investigated, which do not allow more thorough analyses of processes or performance strategies. A theoretical framework has also been missing that could (a) explain the effects of explicit monitoring on skill execution and (b) make predictions as to what is being monitored during execution. Consequently, in this study the nodal-point hypothesis of motor control (Hossner & Ehrlenspiel, 2006) was used to predict movement changes on three levels of analysis at certain "nodal points" within the movement sequence. Performance in two different laboratory tasks was assessed with respect to overt performance (the observable result, for example accuracy on the target), covert performance (description of movement execution, for example the acceleration of body segments) and task exploitation (the utilization of task properties such as covariation). A fake competition (see Beilock & Carr, 2002) was used to induce pressure. In study 1, a ball-bouncing task in a virtual-reality set-up was chosen. Previous studies (de Rugy, Wei, Müller, & Sternad, 2003) have shown that learners are usually able to "passively" exploit the dynamical stability of the system. According to explicit monitoring theories, choking should be expected either if the task itself evokes "active control" (Experiment 1) or if learners are provided with explicit instructions (Experiment 2). In both experiments, participants first went through a practice phase on day 1. On day 2, following the baseline test, participants were divided into a High-Stress or No-Stress Group for the final performance test. The High-Stress Group entered a fake competition. Overt performance was measured by the absolute error (AE) of ball amplitudes from target height; covert performance was measured by period modulation between successive hits; and task exploitation was measured by acceleration (AC) at ball-racket impact and covariation (COV) of impact parameters. To evoke active control in Exp. 1 (N=20), perturbations of the ball flight were introduced. In Exp. 2 (N=39), half of the participants received explicit skill-focused instructions during learning. For overt performance, results generally show an interaction between Stress Group and Test, with better performance (i.e. lower AE) for the High-Stress Group in the final performance test. This effect is independent of the instructions that participants had received during learning (Exp. 2). Similar effects were found for COV but not for AC. In study 2, a visuomotor tracking task was used in which participants had to pursue a target cross that was moving on an invisible curve. This curve consisted of 3 segments with 6 turning points sequentially ordered around the x-axis. Participants learned two short movement sequences which were then concatenated to form a single sequence. It was expected that under pressure this sequence should "fall apart" at the point of concatenation.
Overt performance was assessed by the root mean square error between target and pursuit cross, as well as by the absolute error at the turning points; covert performance was measured by the latency from target to pursuit turning; and task exploitation was measured by the temporal covariation between successive intervals between turning points. Experiment 3 (intraindividual variation) as well as Experiment 4 (interindividual variation) show performance enhancement in the pressure situation on the overt level, with matching results on the covert and task-exploitation levels. Thus, contrary to previous studies, no choking under pressure was found in any of the experiments. This may be interpreted as a failure of the experimental manipulation, but it certainly also highlights important characteristics of the task: choking should occur in tasks in which performers do not have the time to use action- or thought-control strategies, which are more relevant to their "self", and which are discrete in nature.
In this paper we compare the behaviour of adverbs of frequency (de Swart 1993) like usually with the behaviour of adverbs of quantity like for the most part in sentences that contain plural definites. We show that sentences containing the former type of Q-adverb evidence that Quantificational Variability Effects (Berman 1991) come about as an indirect effect of quantification over situations: in order for quantificational variability readings to arise, these sentences have to obey two newly observed constraints that clearly set them apart from sentences containing corresponding quantificational DPs, and that can plausibly be explained under the assumption that quantification over (the atomic parts of) complex situations is involved. Concerning sentences with the latter type of Q-adverb, on the other hand, such evidence is lacking: with respect to the constraints just mentioned, they behave like sentences that contain corresponding quantificational DPs. We take this as evidence that Q-adverbs like for the most part do not quantify over the atomic parts of sum eventualities in the cases under discussion (as claimed by Nakanishi and Romero (2004)), but rather over the atomic parts of the respective sum individuals.
Optical methods play an important role in process analytical technologies (PAT). Four examples of optical process and quality sensing (OPQS) are presented, which are based on three important experimental techniques: near-infrared absorption, luminescence quenching, and a novel method, photon density wave (PDW) spectroscopy. These are used to evaluate four process and quality parameters related to beer brewing and polyurethane (PU) foaming processes: the ethanol content and the oxygen (O2) content in beer, the biomass in a bioreactor, and the cellular structures of PU foam produced in a pilot production plant.
We give a construction of an eigenstate for a non-critical level of the Hamiltonian function, and investigate the contribution of Morse critical points to the spectral decomposition. We compare the rigorous result with the series obtained by a perturbation theory. As an example the relation to the spectral asymptotics is discussed.
The ultimate aim of this study is to better understand the relevance of weak electricity in the adaptive radiation of African mormyrid fish. The chosen model taxon, the genus Campylomormyrus, exhibits a wide diversity of electric organ discharge (EOD) waveform types. The EOD is age-, sex-, and species-specific and is an important character for discriminating among species that are otherwise cryptic. After having established a complementary set of molecular markers, I examined the radiation of Campylomormyrus by a combined approach of molecular data (sequence data from the mitochondrial cytochrome b and the nuclear S7 ribosomal protein gene, as well as 18 microsatellite loci developed especially for the genus Campylomormyrus), observation of the ontogeny and diversification of the EOD waveform, and morphometric analysis of relevant morphological traits. I built the first convincing phylogenetic hypothesis for the genus Campylomormyrus. Using the microsatellite data, the identified phylogenetic clades proved to be reproductively isolated biological species. In this way I detected at least six species occurring in sympatry near Brazzaville/Kinshasa (Congo Basin). By combining molecular data and EOD analyses, I could show that there are three cryptic species, characterised by their own adult EOD types, hidden under a common juvenile EOD form. In addition, I confirmed that the adult male EOD is species-specific and differs more among closely related species than among more distantly related ones. This result, together with the observation that the EOD changes with maturity, suggests its function as a reproductive isolation mechanism. As a result of my morphometric shape analysis, I could assign species types to the identified reproductively isolated groups to produce a sound taxonomy of the group. I could also identify morphological traits relevant to the divergences between the identified species. Among them, the variations I found in the shape of the trunk-like snout suggest the presence of different trophic specializations; this trait might therefore have been involved in the ecological radiation of the group. In conclusion, I provide a convincing scenario envisioning an adaptive radiation of weakly electric fish triggered by sexual selection via assortative mating due to differences in EOD characteristics, but caused by divergent selection on morphological traits correlated with feeding ecology.
The first goal of the present work addresses a need of the Global Change and Financial Transition (GFT) working group at the Potsdam Institute for Climate Impact Research (PIK): I provide a toolbox which contains a variety of rationing methods to be applied to micro-economic disequilibrium models of the lagom model family. This toolbox consists of well-known rationing methods and of rationing methods provided specifically for lagom. To ensure easy application, the toolbox is constructed in modular fashion. The second goal of the present work is to present a micro-economic labour market where heterogeneous labour suppliers experience consecutive job opportunities and need to decide whether to apply for employment. The labour suppliers are heterogeneous with respect to their qualifications and their beliefs about the application behaviour of their competitors. They learn simultaneously, in Bayesian fashion, about their individual perceived probability of obtaining employment conditional on application (PPE) by observing each other's application behaviour over a cycle of job opportunities.
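The Bayesian learning mechanism described above can be sketched with a conjugate Beta-Bernoulli update of each supplier's PPE belief; class and parameter names are illustrative, not the lagom implementation:

```python
# Each labour supplier tracks a Beta(a, b) belief over the probability of
# obtaining employment conditional on application (PPE) and updates it
# after observing the outcome of each job opportunity.
class Supplier:
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b            # uniform prior Beta(1, 1)

    @property
    def ppe(self):                       # posterior mean belief
        return self.a / (self.a + self.b)

    def applies(self, reservation=0.4):  # apply if belief exceeds a threshold
        return self.ppe >= reservation

    def observe(self, hired):            # conjugate Bernoulli update
        if hired:
            self.a += 1.0
        else:
            self.b += 1.0

s = Supplier()
for outcome in (True, False, False, True, False):  # consecutive opportunities
    s.observe(outcome)
print(round(s.ppe, 3), s.applies())  # 0.429 True
```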
Content:
1. Objectives
2. Sociohistorical Background
2.1. The Cornish
2.2. The Welsh
2.3. The Bretons
3. Characteristics of the Brythonic Naming System
3.1. Type 1 Names: Patronymic Lineage
3.2. Type 2 Names: Geographic Origin or Place of Residence
3.3. Type 3 Names: Occupational Activities (Generally Linked to Peasantry)
3.4. Type 4 Names: Physical Characteristics, Moral Flaws
3.5. Type 5 Names: Epithets Relating to Character, Titles of Nobility, etc.
3.6. Epithets Containing References to Victory, War, Warriors, Weapons
3.7. Epithets Containing References to Courage, Strength, Impetuousness and War-like Animals
3.8. Epithets Containing References to Honorific Titles, Noble Lineage, Social Status and Aristocratic Values
4. Summary
Contents:
Part I: Symplectic Geometry
Chapter 1: Symplectic Spaces and Lagrangian Planes
Chapter 2: The Symplectic Group
Chapter 3: Multi-Oriented Symplectic Geometry
Chapter 4: Intersection Indices in Lag(n) and Sp(n)
Part II: Heisenberg Group, Weyl Calculus, and Metaplectic Representation
Chapter 5: Lagrangian Manifolds and Quantization
Chapter 6: Heisenberg Group and Weyl Operators
Chapter 7: The Metaplectic Group
Part III: Quantum Mechanics in Phase Space
Chapter 8: The Uncertainty Principle
Chapter 9: The Density Operator
Chapter 10: A Phase Space Weyl Calculus
Do institutions matter?
(2006)
Contents:
1 Introduction
2 Institutions and Institutional Change
2.1 Institutions and Theoretical Concepts in Economics
2.2 Path Dependence
2.3 Inconsistency of Institutional Development
2.4 Determinants of Effectiveness
2.5 Efficiency of New Institutions
3 What is "Competition Policy"?
4 Competition Policy in Russia as an Institution
4.1 Establishment of Competition Policy as an Institution
4.2 Market Structure and Competition Policy
4.3 Measures of Competition Policy
4.3.1 Prohibition of Competition-Restrictive Agreements or Concerted Actions
4.3.2 Abuse of Dominance
4.3.3 Merger Control
4.3.4 Competition-Restrictive Actions of Administrative Bodies
4.4 Violations of the Competition Law
4.5 Problems of the Russian Competition Policy
5 Which Mistakes Has Russia Made with the Implementation of the Competition Policy?
6 Is a Lack of Effectiveness of Transplanted Institutions Inevitable?
7 Concluding Remarks
This thesis was devoted to the study of the coupled system composed of the El Niño/Southern Oscillation and the Annual Cycle. More precisely, the work focused on two main problems: 1. How to separate both oscillations within an affordable model for understanding the behaviour of the whole system. 2. How to model the system in order to achieve a better understanding of the interaction, as well as to predict future states of the system. We focused our efforts on the sea surface temperature equations, considering that atmospheric effects are secondary to the ocean dynamics. The results may be summarised as follows: 1. Linear methods are not suitable for characterising the dimensionality of the sea surface temperature in the tropical Pacific Ocean, and therefore do not by themselves help to separate the oscillations. Instead, nonlinear methods of dimensionality reduction are proven to be better at defining a lower limit for the dimensionality of the system, as well as at explaining the statistical results in a more physical way [1]. In particular, Isomap, a nonlinear modification of multidimensional scaling methods, provides a physically appealing method of decomposing the data, as it replaces the Euclidean distances in the manifold by an approximation of the geodesic distances. We expect that this method could be successfully applied to other oscillatory extended systems and, in particular, to meteorological systems. 2. A three-dimensional dynamical system could be modeled, using a backfitting algorithm, to describe the dynamics of the sea surface temperature in the tropical Pacific Ocean. We observed that, although few data points were available, we could predict future behaviours of the coupled ENSO-Annual Cycle system for lead times of less than six months, although the constructed system presented several drawbacks: few data points to feed into the backfitting algorithm, an untrained model, lack of forcing with external data, and the simplification of using a closed system. Nevertheless, ensemble prediction techniques showed that the prediction skill of the three-dimensional time series was as good as that of much more complex models. This suggests that the climatological system in the tropics is mainly explained by ocean dynamics, while the atmosphere plays a secondary role in the physics of the process. Relevant predictions for short lead times can be made using a low-dimensional system, despite its simplicity. The analysis of the SST data suggests that nonlinear interaction between the oscillations is small, and that noise plays a secondary role in the fundamental dynamics of the oscillations [2]. A global view of the work shows a general procedure for modeling climatological systems: first, find a suitable method of either linear or nonlinear dimensionality reduction; then, extract low-dimensional time series from the applied method; finally, fit a low-dimensional model using a backfitting algorithm in order to predict future states of the system.
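The Isomap step described under point 1 is available off the shelf; below is a minimal sketch on surrogate SST-like data (data, dimensions and parameters are illustrative, not the thesis's analysis):

```python
import numpy as np
from sklearn.manifold import Isomap

# Surrogate field: two coupled oscillations (annual cycle + slower ENSO-like
# mode) observed at 50 spatial points, mimicking gridded SST anomalies.
t = np.arange(1200) / 12.0                         # monthly steps, in years
modes = np.stack([np.sin(2 * np.pi * t),           # annual cycle
                  np.sin(2 * np.pi * t / 4.0)])    # ~4-year oscillation
mixing = np.random.default_rng(5).standard_normal((50, 2))
field = (mixing @ modes).T                         # (time, space)

# Isomap replaces Euclidean with approximate geodesic distances on the
# manifold before embedding, here into three dimensions.
embedding = Isomap(n_neighbors=12, n_components=3).fit_transform(field)
print(embedding.shape)  # (1200, 3)
```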
If we want to compare the explanatory and descriptive adequacy of the MP and OT, the original definitions by Chomsky (1964) are of little direct use. However, a relativized version of both notions can be defined, which can be used to express a number of parallels between the study of individual I-languages and the language faculty. In any version of explanatory and descriptive adequacy, the two notions derive from the research programme and can only be achieved together. They can therefore not be used to characterize the difference in orientation between OT and the MP. Even if 'OT' is restricted to a particular theory in Chomskyan linguistics (to the exclusion of, for instance, its use in LFG), it cannot be said to be stronger in descriptive adequacy than in explanatory adequacy in the technical sense of these terms.
The paper presents an in-depth study of focus marking in Gùrùntùm, a West Chadic language spoken in Bauchi Province of Northern Nigeria. Focus in Gùrùntùm is marked morphologically by means of a focus marker a, which typically precedes the focus constituent. Even though the morphological focus-marking system of Gùrùntùm allows for many fine-grained distinctions in information structure (IS) in principle, the language is not entirely free of focus ambiguities, which arise as a result of conflicting IS and syntactic requirements governing the placement of focus markers. We show that morphological focus marking with a applies across different types of focus, such as new-information, contrastive, selective and corrective focus, and that a does not have a second function as a perfectivity marker, as is assumed in the literature. In contrast, we show at the end of the paper that a can also function as a foregrounding device at the level of discourse structure.
We study elliptic boundary value problems in a wedge with additional edge conditions of trace and potential type. We compute the (difference of the) number of such conditions in terms of the Fredholm index of the principal edge symbol. The task will be reduced to the case of special opening angles, together with a homotopy argument.