Refine
Year of publication
- 2006 (840)
Document Type
- Article (581)
- Doctoral Thesis (82)
- Monograph/Edited Volume (54)
- Conference Proceeding (32)
- Postprint (26)
- Preprint (24)
- Review (18)
- Other (17)
- Master's Thesis (3)
- Working Paper (2)
Language
- English (840)
Keywords
- climate change (4)
- polyelectrolyte (4)
- fluorescence resonance energy transfer (3)
- immunoassay (3)
- nanoparticle (3)
- nonlinear dynamics (3)
- Optimality Theory (3)
- polyelectrolyte (3)
- metabolite profiling (3)
- metabolomics (3)
Institute
- Institut für Physik und Astronomie (159)
- Institut für Biochemie und Biologie (128)
- Institut für Chemie (81)
- Institut für Geowissenschaften (73)
- Institut für Mathematik (71)
- Institut für Informatik und Computational Science (55)
- Department Linguistik (46)
- Extern (37)
- Department Psychologie (36)
- Institut für Anglistik und Amerikanistik (34)
- Wirtschaftswissenschaften (32)
- Institut für Umweltwissenschaften und Geographie (24)
- Institut für Ernährungswissenschaft (23)
- Institut für Romanistik (17)
- Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung (11)
- Sozialwissenschaften (10)
- Department Sport- und Gesundheitswissenschaften (5)
- Institut für Germanistik (5)
- Institut für Jüdische Studien und Religionswissenschaft (5)
- Historisches Institut (4)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (3)
- WeltTrends e.V. Potsdam (3)
- Interdisziplinäres Zentrum für Dynamik komplexer Systeme (2)
- Department Erziehungswissenschaft (1)
- Hasso-Plattner-Institut für Digital Engineering GmbH (1)
- Klassische Philologie (1)
- Lehreinheit für Wirtschafts-Arbeit-Technik (1)
- Strukturbereich Kognitionswissenschaften (1)
- Öffentliches Recht (1)
China, the largest foreign direct investment (FDI) host country in the world and the leading developing country in terms of FDI inflow volume, has been attracting increasing international attention from companies and policy makers. As more and more German manufacturing companies move into China, the investment is becoming larger in size and higher in quality. In the meantime, the motives and nature of German FDI in China and the related technological activities have developed into an increasingly important topic for both Chinese and overseas researchers. This paper aims to analyse and explain the FDI movement driven by German companies in China and the role technology plays in it. Our research includes a literature review, a database analysis and a mail survey of German firms investing in China. Different indicators suggest that the motives for German FDI are long-term and deeply market-oriented, characterised by the search for new markets and the enlargement of market shares. Technology transfer is therefore mainly dedicated to production and managerial facilities.
Forum: EU-Diplomatie im Jahre 2020
Metabolism of the dietary lignan secoisolariciresinol diglucoside by human intestinal bacteria
(2006)
Two experiments investigated the acceptability of multiple questions. As expected, sentences violating the Superiority Condition were accepted less often than sentences obeying it. The status of the Superiority violations was not improved by the addition of a third wh, regardless of whether the third wh was an adjunct or an argument, though it was improved by the addition of a second question (e.g., and when). Further, in a small pilot study directly comparing a sentence with adjacent final wh-phrases that may induce a stress clash (I'd like to know who hid it where when) with a sentence violating Superiority but avoiding the final adjacent wh-phrases (I'd like to know where who hid it when), half the participants indicated that the Superiority violation sentence sounded better. This suggests that the status of some additional-wh sentences may appear to improve simply because the comparison sentence with adjacent final wh-phrases is degraded. Overall, the results of the studies suggest that there is no need to complicate syntactic theory to account for the additional-wh effect, because there is no general additional-wh effect.
The emergence of drug resistance remains one of the most challenging issues in the treatment of HIV-1 infection. The extreme replication dynamics of HIV facilitates its escape from the selective pressure exerted by the human immune system and by the applied combination drug therapy. This article reviews computational methods whose combined use can support the design of optimal antiretroviral therapies based on viral genotypic and phenotypic data. Genotypic assays are based on the analysis of mutations associated with reduced drug susceptibility, but are difficult to interpret due to the numerous mutations and mutational patterns that confer drug resistance. Phenotypic resistance or susceptibility can be experimentally evaluated by measuring the inhibition of viral replication in cell culture assays. However, this procedure is expensive and time-consuming.
Elliptic equations on configurations W = W_1 ∪ … ∪ W_N with edge Y and components W_j of different dimension can be treated in the frame of pseudo-differential analysis on manifolds with geometric singularities, here edges. Starting from edge-degenerate operators on W_j, j = 1, …, N, we construct an algebra with extra 'transmission' conditions on Y that satisfy an analogue of the Shapiro-Lopatinskij condition. Ellipticity refers to a two-component symbolic hierarchy with an interior and an edge part; the latter is operator-valued, operating on the union of different-dimensional model cones. We construct parametrices within our calculus, where the exchange of information between the various components is encoded in Green and Mellin operators that are smoothing on W \ Y. Moreover, we obtain regularity of solutions in weighted edge spaces with asymptotics.
The arid Puna plateau of the southern Central Andes is characterized by Cenozoic distributed shortening forming intramontane basins that are disconnected from the humid foreland because of the defeat of orogen-traversing channels. Thick Tertiary and Quaternary sedimentary fills in Puna basins have reduced topographic contrasts between the compressional basins and ranges, leading to a typical low-relief plateau morphology. Structurally identical basins that are still externally drained straddle the eastern border of the Puna and document the eastward propagation of orographic barriers and ensuing aridification. One of them, the Angastaco basin, is transitional between the highly compartmentalized Puna highlands and the undeformed Andean foreland. Sandstone petrography and structural and stratigraphic analysis, combined with detrital apatite fission-track thermochronology from a ∼6200-m-thick Miocene to Pliocene stratigraphic section in the Angastaco basin, document the late Eocene to late Pliocene exhumation history of source regions along the eastern border of the Puna (Eastern Cordillera (EC)) as well as the construction of orographic barriers along the southeastern flank of the Central Andes. Onset of exhumation of a source in the EC in late Eocene time as well as rapid exhumation of the Sierra de Luracatao (in the EC) at about 20 Ma are recorded in the detrital sediments of the Angastaco basin. Sediment accumulation in the basin began ∼15 Ma, a time at which the EC had already built sufficient topography to prevent Puna-sourced detritus from reaching the basin. After ∼13 Ma, shortening shifted eastward, exhuming ranges that preserve an apatite fission-track partial annealing zone recording cooling during the late Cretaceous rifting event. Facies changes and fossil content suggest that after 9 Ma, the EC constituted an effective orographic barrier that prevented moisture penetration into the plateau. Between 3.4 and 2.4 Ma, another orographic barrier was uplifted to the east, leading to further aridification and pronounced precipitation gradients along the mountain front. This study emphasizes the important role of tectonics in the evolution of climate in this part of the Andes.
We investigate the relationship between the gap between the energy of the ground state and the first excited state and the decay of correlation functions in harmonic lattice systems. We prove that in gapped systems, the exponential decay of correlations follows for both the ground state and thermal states. Considering the converse direction, we show that an energy gap can follow from algebraic decay and always does for exponential decay. The underlying lattices are described as general graphs of not necessarily integer dimension, including translationally invariant instances of cubic lattices as special cases. Any local quadratic couplings in position and momentum coordinates are allowed for, leading to quasi-free (Gaussian) ground states. We make use of methods for deriving bounds on matrix functions of banded matrices corresponding to local interactions on general graphs. Finally, we give an explicit entanglement-area relationship in terms of the energy gap for arbitrary, not necessarily contiguous regions on lattices characterized by general graphs.
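The gap-correlation connection can be seen numerically in the simplest case. The sketch below is our own illustration, not the paper's proof machinery; it assumes the standard fact that for a harmonic Hamiltonian H = ½(pᵀp + xᵀVx) the ground-state position correlations are ⟨x_i x_j⟩ = ½(V^(-1/2))_ij, and uses a nearest-neighbor chain with an on-site term m² that opens a spectral gap:

```python
import numpy as np

# Toy gapped harmonic chain (our illustration): V is tridiagonal with an
# on-site mass term m2 > 0, so the single-particle spectrum sqrt(eig(V))
# is bounded away from zero and correlations should decay exponentially.
n = 60
m2 = 1.0                                        # on-site term -> spectral gap
V = (2.0 + m2) * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
w, U = np.linalg.eigh(V)                        # V is positive definite here
X = 0.5 * (U * w**-0.5) @ U.T                   # <x_i x_j> = (1/2) V^(-1/2)
c = np.abs(X[n // 2])                           # correlations from middle site
```

Plotting `np.log(c)` against site index shows the linear (i.e. exponential) decay away from the middle site that the gapped case of the abstract describes.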
We demonstrate that the entropy of entanglement and the distillable entanglement of regions with respect to the rest of a general harmonic-lattice system in the ground or a thermal state scale at most as the boundary area of the region. This area law is rigorously proven to hold true in noncritical harmonic-lattice systems of arbitrary spatial dimension, for general finite-ranged harmonic interactions, regions of arbitrary shape, and states of nonzero temperature. For nearest-neighbor interactions (corresponding to the Klein-Gordon case), upper and lower bounds to the degree of entanglement can be stated explicitly for arbitrarily shaped regions, generalizing the findings of Phys. Rev. Lett. 94, 060503 (2005). These higher-dimensional analogs of the analysis of block entropies in the one-dimensional case show that under general conditions, one can expect an area law for the entanglement in noncritical harmonic many-body systems. The proofs make use of methods from entanglement theory, as well as of results on matrix functions of block-banded matrices. Disordered systems are also considered. We moreover construct a class of examples for which the two-point correlation length diverges, yet an area law can still be proven to hold. We finally consider the scaling of classical correlations in a classical harmonic system and relate it to a quantum lattice system with a modified interaction. We briefly comment on a general relationship between criticality and area laws for the entropy of entanglement.
We analyze the notions of monotonicity and complete monotonicity for Markov Chains in continuous-time, taking values in a finite partially ordered set. Similarly to what happens in discrete-time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete-time.
Effects of frequency, predictability, and position of words on event-related potentials were assessed during word-by-word sentence reading in 48 subjects in an early and a late time window corresponding to P200 and N400. Repeated measures multiple regression analyses revealed a P200 effect in the high-frequency range; the P200 was also larger on words at the beginning and end of sentences than on words in the middle of sentences (i.e., a quadratic effect of word position). Predictability strongly affected the N400 component; the effect was stronger for low- than for high-frequency words. The P200 frequency effect indicates that high-frequency words are lexically accessed very fast, independent of context information. Effects on the N400 suggest that predictability strongly moderates the late access especially of low-frequency words. Thus, contextual facilitation on the N400 appears to reflect both lexical and post-lexical stages of word recognition, questioning a strict classification into lexical and post-lexical processes.
Reversible assembly of the V0V1 holoenzyme from V0 and V1 subcomplexes is a widely used mechanism for the regulation of vacuolar-type H+-ATPases (V-ATPases) in animal cells. In the blowfly (Calliphora vicina) salivary gland, V-ATPase is located in the apical membrane of the secretory cells and energizes the secretion of a KCl-rich saliva in response to the hormone serotonin. We have examined whether the cAMP pathway, known to be activated by serotonin, controls V-ATPase assembly and activity. Fluorescence measurements of pH changes at the luminal surface of isolated glands demonstrate that cAMP, Sp-adenosine-3',5'-cyclic monophosphorothioate, or forskolin, similar to serotonin, cause V-ATPase-dependent luminal acidification. In addition, V-ATPase-dependent ATP hydrolysis increases upon treatment with these agents. Immunofluorescence microscopy and pelleting assays have demonstrated further that V1 components become translocated from the cytoplasm to the apical membrane and V-ATPase holoenzymes are assembled at the apical membrane under conditions that increase intracellular cAMP. Because these actions occur without a change in cytosolic Ca2+, our findings suggest that the cAMP pathway mediates the reversible assembly and activation of V-ATPase molecules at the apical membrane upon hormonal stimulation.
Experimental evidence of anomalous phase synchronization in two diffusively coupled Chua oscillators
(2006)
We study the transition to phase synchronization in two diffusively coupled, nonidentical Chua oscillators. In the experiments, depending on the parameterization used, we observe several distinct routes to phase synchronization, including states of either in-phase, out-of-phase, or antiphase synchronization, which may be intersected by an intermediate desynchronization regime with large fluctuations of the frequency difference. Furthermore, we report the first experimental evidence of an anomalous transition to phase synchronization, which is characterized by an initial enlargement of the natural frequency difference with coupling strength. This results in a maximal frequency disorder at intermediate coupling levels, whereas usual phase synchronization via a monotonic decrease in frequency difference sets in only for larger coupling values. All experimental results are supported by numerical simulations of two coupled Chua models.
Plastidial phosphorylase is required for normal starch synthesis in Chlamydomonas reinhardtii
(2006)
Among the three distinct starch phosphorylase activities detected in Chlamydomonas reinhardtii, two distinct plastidial enzymes (PhoA and PhoB) are documented, while a single extraplastidial form (PhoC) displays a higher affinity for glycogen, as in vascular plants. The two plastidial phosphorylases are shown to function as homodimers containing two 91-kDa (PhoA) subunits and two 110-kDa (PhoB) subunits. Both lack the typical 80-amino-acid insertion found in the higher plant plastidial forms. PhoB is exquisitely sensitive to inhibition by ADP-glucose and has a low affinity for malto-oligosaccharides. PhoA is more similar to the higher plant plastidial phosphorylases: it is moderately sensitive to ADP-glucose inhibition and has a high affinity for unbranched malto-oligosaccharides. Molecular analysis establishes that STA4 encodes PhoB. Chlamydomonas reinhardtii strains carrying mutations at the STA4 locus display a significant decrease in amounts of starch during storage that correlates with the accumulation of abnormally shaped granules containing a modified amylopectin structure and a high amylose content. The wild-type phenotype could be rescued by reintroduction of the cloned wild-type genomic DNA, thereby demonstrating the involvement of phosphorylase in storage starch synthesis.
Quantification of discrete pressure-temperature domains in deformed chlorite + white mica-bearing metapelites was undertaken on mineral compositions derived by two-dimensional microprobe compositional mapping of selected areas of rock thin sections. In order to achieve compositional information at sufficient analytical precision, spatial resolution and sample coverage within a typical analysis time of 1 day, an optimization of measurement methods was necessary. The method presented here allows collection of raw counts for eight different element concentrations at an analytical precision of ∼1-2 wt%. X-ray intensity multiplane maps (one map per measured chemical element) are translated into concentration multiplane maps, utilizing selected conventionally measured spot analyses combined with the Castaing approximation for each mineral. As this step requires identification of the different minerals present in the mapped area, a statistical clustering technique to identify different groups of composition was developed, guided by simple petrographic inspection of the thin section, to delineate the important minerals in the mapped area. Finally, the compositions of each pixel are translated into a mineral structural formula, thus yielding a new kind of image with a high content of petrological information. The reliability of the mineral composition images was emphasized by carrying out precision tests on the analytical data. The possible use of chemical maps to infer the P-T-deformation history of metamorphic rocks is illustrated with two samples from the Spitzbergen and the Sambagawa blueschist facies belts. In both samples, a strong correlation between structures and chemistry is observed. Qualitative estimates of P-T conditions from the Si content of mica and chlorite are in good agreement with their location in microstructures that formed at different times. Therefore, the combination of chemical maps with microstructural observations is a very powerful approach to understand both the evolution of complex metamorphic rocks and the control by deformation of mineral reactivity.
A methodology is presented to assess the impact of reservoir silting on water availability in semiarid environments, applied to seven representative watersheds in the state of Ceara, Brazil. Water yield is computed using stochastic modelling for several reliability levels, and the yield reduction is quantified for the focus areas. The yield-volume elasticity concept, which indicates the relative yield reduction in terms of the relative storage capacity of the reservoirs, is presented and applied. Results show that storage capacity was reduced by 0.2% year(-1) due to silting, that the risk of water shortage almost doubled in less than 50 years for the most critical reservoir, and that the reduction of storage capacity had three times more impact on yield reduction than the increase in evaporation. The average 90% reliable yield-volume elasticity was 0.8, which means that the global water yield (Q(90)) in Ceara is expected to diminish yearly by 388 L s(-1) due to reservoir silting.
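The yield-volume elasticity can be read as a simple proportionality between relative yield loss and relative storage loss. A minimal sketch (our illustration, reusing only the figures quoted in the abstract; the function name is ours):

```python
# Illustrative sketch: yield-volume elasticity relates relative yield
# reduction to relative storage-capacity reduction.

def yield_reduction(elasticity, rel_storage_loss):
    """Relative yield loss implied by a relative storage-capacity loss."""
    return elasticity * rel_storage_loss

# With the reported elasticity of 0.8 and a silting rate of 0.2% per year,
# the implied annual reduction of the 90%-reliable yield is:
annual_loss = yield_reduction(0.8, 0.002)
print(f"{annual_loss:.2%} per year")   # prints: 0.16% per year
```

That is, an elasticity below 1 dampens the effect of silting: each percent of storage lost costs less than a percent of reliable yield.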
In his short paper of 1886, the neogrammarian linguist Delbrück sketches his views on normal language processing and their relevance for the interpretation of some of the symptoms of progressive anomic aphasia. In particular, he discusses proper name impairments, verb and abstract noun superiority, and the predominance of semantically related errors. Furthermore, he suggests that part of speech, morphology and word order may be preserved in this condition. This historical document has fallen into oblivion, but the original ideas and their relevance for contemporary discussions merit a revival.
For the Puna Plateau and Eastern Cordillera of NW Argentina, the temporal and spatial pattern of deformation and surface uplift remain poorly constrained. Analysis of completely and partially reset apatite fission track samples collected from vertical profiles along an ESE-trending transect extending from the plateau interior across the southern Eastern Cordillera at ∼25°S reveals important constraints on the deformation and exhumation history of this part of the Andes. The data constrain the Neogene Andean development of the Eastern Cordillera as well as rift-related exhumation for some of the sampled locations in the Late Jurassic/Early Cretaceous. An intervening Eocene-Oligocene exhumation episode in the southern Eastern Cordillera was probably related to crustal shortening. Subsequent reburial of the area by Andean foreland basin strata commenced between 30 and 25 Myr. Magnitude and duration of sedimentation, revealed by thermal modeling, differ between the sample locations, pointing to an eastward-propagating basin system. In the southern Eastern Cordillera, Andean deformation commenced at 22.5-21 Myr, predating both the inferred formation of significant topography by 5-7.5 Myr and the preservation of sediments in the adjacent Cenozoic basins by 6.5-8 Myr. Comparing the calculated structural depth of partially reset samples suggests that newly formed west-dipping reverse faults along the former Salta Rift margin accommodated most of the Neogene tectonic movement. Late Cenozoic deformation at the southern Eastern Cordillera began earlier in the west and subsequently propagated eastward. The lateral growth of the orogen is coupled with a foreland basin system that develops in front of the range and subsequently becomes compartmentalized by later emergent topography.
An Extended Query language for action languages (and its application to aggregates and preferences)
(2006)
We report here for the first time on surface immobilization of hollow faceted polyhedrons formed from catanionic surfactant mixtures. We find that electrostatic interaction with the substrate dominates their adhesion behavior. Using polyelectrolyte-coated surfaces with tailored charge densities, polyhedrons can thus be immobilized without complete spreading, which allows for further study of their mechanical properties using AFM force measurements. The elastic response of individual polyhedrons can be locally resolved, showing pronounced differences in stiffness between faces and vertices of the structure, which makes these systems interesting as models for structurally similar colloidal-scale objects such as viruses, where such effects are predicted but cannot be directly observed due to the smaller dimensions. Elastic constants of the wall material are estimated using shell and plate deformation models and are found to be a factor of 5 larger than those for neutral lipidic bilayers in the gel state. We discuss the molecular origins of this high stiffness.
Hypersubstitutions are mappings which map operation symbols to terms. Terms can be visualized by trees. Hypersubstitutions can be extended to mappings defined on sets of trees. The nodes of the trees describing terms are labelled by operation symbols and by colors, i.e. certain positive integers. We are interested in mappings which map differently-colored operation symbols to different terms. In this paper we extend the theory of hypersubstitutions and solid varieties to multi-hypersubstitutions and colored solid varieties. We develop the interconnections between such colored terms and multi-hypersubstitutions and the equational theory of Universal Algebra. The collection of all varieties of a given type forms a complete lattice which is very complex and difficult to study; multi-hypersubstitutions and colored solid varieties offer a new method to study complete sublattices of this lattice.
It is shown that an elliptic scattering operator A on a compact manifold with boundary, with operator-valued coefficients in the morphisms of a bundle of Banach spaces of class (HT) and Pisier's property (α), has maximal regularity (up to a spectral shift), provided that the spectrum of the principal symbol of A on the scattering cotangent bundle avoids the right half-plane. This is accomplished by representing the resolvent in terms of pseudodifferential operators with R-bounded symbols, yielding by an iteration argument the R-boundedness of λ(A − λ)^{-1} in Re(λ) ≥ τ for some τ ∈ ℝ. To this end, elements of a symbolic and operator calculus of pseudodifferential operators with R-bounded symbols are introduced. The significance of this method for proving maximal regularity results for partial differential operators is underscored by considering also a more elementary situation of anisotropic elliptic operators on ℝ^d with operator-valued coefficients.
The nature of the periplastidial pathway of starch biosynthesis was investigated in the model cryptophyte Guillardia theta. The storage polysaccharide granules were shown to be composed of both amylose and amylopectin fractions with a chain length distribution and crystalline organization very similar to those of starch from green algae and land plants. Most starch granules displayed a shape consistent with biosynthesis occurring around the pyrenoid through the rhodoplast membranes. A protein with significant similarity to the amylose-synthesizing granule-bound starch synthase 1 from green plants was found to be the major polypeptide bound to the polysaccharide matrix. N-terminal sequencing of the mature protein proved that the precursor protein carries a nonfunctional transit peptide in its bipartite topogenic signal sequence, which is cleaved without yielding transport of the enzyme across the two inner plastid membranes. The enzyme was shown to display similar affinities for ADP-glucose and UDP-glucose, while the Vmax measured with UDP-glucose was twofold higher. The granule-bound starch synthase from Guillardia theta was demonstrated to be responsible for the synthesis of long glucan chains and therefore to be the functional equivalent of the amylose-synthesizing enzyme of green plants. Preliminary characterization of the starch pathway suggests that Guillardia theta utilizes a UDP-glucose-based pathway to synthesize starch.
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
We introduce a method for computing instantaneous-polarization attributes from multicomponent signals. It improves on the standard covariance method (SCM) because it does not depend on the window size used to compute the standard covariance matrix. We overcome the window-size problem by deriving an approximate analytical formula for the cross-energy matrix in which the time window is determined automatically and adaptively. The proposed method applies polarization analysis to multicomponent seismic data for waveform separation and filtering.
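For orientation, the baseline the abstract improves upon can be sketched. This is a schematic version of the standard covariance method (SCM), not the authors' adaptive technique; the window length, the rectilinearity definition, and the toy signal are our assumptions:

```python
import numpy as np

def scm_polarization(x, y, z, win):
    """Standard covariance method (SCM): for each sample, eigen-decompose
    the covariance matrix of a sliding window over the 3-component signal.
    Returns rectilinearity and the dominant polarization direction.
    The result depends on the window length `win`, which is exactly the
    limitation the adaptive method described above aims to remove."""
    n = len(x)
    half = win // 2
    rect = np.zeros(n)
    direc = np.zeros((n, 3))
    data = np.vstack([x, y, z])
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        cov = np.cov(data[:, lo:hi])
        w, v = np.linalg.eigh(cov)                     # ascending eigenvalues
        rect[i] = 1.0 - (w[0] + w[1]) / (2.0 * w[2])   # rectilinearity
        direc[i] = v[:, 2]                             # dominant eigenvector
    return rect, direc

# A purely linearly polarized test signal should give rectilinearity near 1.
t = np.linspace(0, 1, 400)
s = np.sin(2 * np.pi * 5 * t)
rect, _ = scm_polarization(s, 0.5 * s, 0.2 * s, win=41)
```

For interfering wave modes, the estimates degrade as `win` changes, which motivates replacing the fixed window with the adaptive cross-energy formulation.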
Characterization of polarization attributes of seismic waves using continuous wavelet transforms
(2006)
Complex-trace analysis is the method of choice for analyzing polarized data. Because particle motion can be represented by instantaneous attributes that show distinct features for waves of different polarization characteristics, it can be used to separate and characterize these waves. Traditional methods of complex-trace analysis only give the instantaneous attributes as a function of time or frequency. However, for transient wave types or seismic events that overlap in time, an estimate of the polarization parameters requires analysis of the time-frequency dependence of these attributes. We propose a method to map instantaneous polarization attributes of seismic signals in the wavelet domain and explicitly relate these attributes to the wavelet-transform coefficients of the analyzed signal. We compare our method with traditional complex-trace analysis using numerical examples. An advantage of our method is the possibility of performing the complete wave-mode separation/filtering process in the wavelet domain and its ability to provide the frequency dependence of ellipticity, which contains important information on the subsurface structure. Furthermore, using 2-C synthetic and real seismic shot gathers, we show how to use the method to separate different wave types and identify zones of interfering wave modes.
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
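The core quantity behind CRQA can be illustrated with a toy sketch. The full analysis of Richardson and Dale (2005) involves delay embedding and diagonal-profile statistics, which we omit here; the signals and the recurrence radius below are invented for illustration:

```python
import numpy as np

def cross_recurrence_rate(a, b, radius):
    """Fraction of time-point pairs (i, j) at which the two gaze signals
    lie within `radius` of each other -- the simplest statistic read off
    a cross-recurrence plot; higher values mean more coupled behaviour."""
    d = np.abs(a[:, None] - b[None, :])    # pairwise distance matrix
    return float((d < radius).mean())

t = np.linspace(0, 4 * np.pi, 200)
gaze_a = np.sin(t)                                       # toy gaze signal
coupled = cross_recurrence_rate(gaze_a, np.sin(t + 0.1), 0.5)
uncoupled = cross_recurrence_rate(gaze_a, t, 0.5)        # unrelated signal
```

A pair of tightly coupled signals yields a higher cross-recurrence rate than an unrelated pair, which is the kind of difference the pilot experiment exploits to compare anaphora types.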
Experimental results show that the polymerization of pyrrole in the presence of beta-naphthalenesulfonic acid and different fluorosurfactants like perfluorooctanesulfonic acid, perfluorooctyldiethanolamide, and ammonium perfluorooctanoate leads to polypyrrole with special morphologies, such as rings or disks and rectangular frames or plates. The formation of these unusually shaped particles of polymer dispersions is explained by the chemical and colloidal peculiarities of the oxidative pyrrole polymerization with ammonium peroxodisulfate in aqueous medium.
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung: Workshop, 9-10 February 2006
We present a patient with primary progressive aphasia who showed no problems dealing with a variety of semantic tasks for simple nouns and numerical material. However, massive impairments became apparent in all lexical input and output tasks for non-number words, whereas peripheral processing was demonstrated to be intact. Interestingly, no parallel impairment was observed for numerals. This is the first case study reporting an isolated sparing of number words at the level of lexical processing in all four modalities.
When Galactic microlensing events of stars are observed, one usually measures a symmetric light curve corresponding to a single lens, or an asymmetric light curve, often with caustic crossings, in the case of a binary lens system. In principle, the fraction of binary stars in a certain separation range can be estimated from the number of measured microlensing events. However, a binary system may produce a light curve that can be fitted well as a single-lens light curve, in particular if the data sampling is poor and the error bars are large. We investigate what fraction of microlensing events produced by binary stars at different separations may be well fitted by, and hence misinterpreted as, single-lens events under various observational conditions. We find that this fraction strongly depends on the separation of the binary components, reaching its minimum between 0.6 and 1.0 Einstein radii, where it is still of the order of 5%. The Einstein radius corresponds to a few A.U. for typical Galactic microlensing scenarios. The rate of misinterpretation is higher for short microlensing events lasting up to a few months and for events with smaller maximum amplification. For fixed separation it increases for binaries with more extreme mass ratios. The problem of degeneracy in the photometric light curve solution between binary-lens and binary-source microlensing events was studied on simulated data and on data observed by the PLANET collaboration. The fitting code BISCO, using the PIKAIA genetic algorithm optimization routine, was written for optimizing binary-source microlensing light curves observed at different sites in the I, R and V photometric bands. Tests on simulated microlensing light curves show that BISCO is successful in finding the solution to a binary-source event in a very wide parameter space. A flux-ratio method is suggested in this work for breaking the degeneracy between binary-lens and binary-source photometric light curves.
Models show that only a few additional data points in the photometric V band, together with a full light curve in the I band, will make it possible to break the degeneracy. Very good data quality and dense data sampling, combined with accurate binary-lens and binary-source modelling, yielded the discovery of the lowest-mass planet found outside the Solar System so far, OGLE-2005-BLG-390Lb, with only 5.5 Earth masses. This was the first observed microlensing event in which the degeneracy between a planetary binary-lens model and an extreme flux-ratio binary-source model has been successfully broken. For the events OGLE-2003-BLG-222 and OGLE-2004-BLG-347, the degeneracy was encountered despite very dense data sampling. From light-curve modelling and stellar evolution theory, there was a slight preference for explaining OGLE-2003-BLG-222 as a binary-source event and OGLE-2004-BLG-347 as a binary-lens event. However, without spectra this degeneracy cannot be fully broken. No planet has so far been found around a white dwarf, although it is believed that Jovian planets should survive the late stages of stellar evolution and that white dwarfs will retain planetary systems in wide orbits. We want to perform high-precision astrometric observations of nearby white dwarfs in wide binary systems with red dwarfs in order to find planets around white dwarfs. We selected a sample of observing targets (WD-RD binary systems, not published yet) that can possibly host planets around the WD component, and modelled the synthetic astrometric orbits that could be observed for these targets using existing and future astrometric facilities. The modelling was performed for astrometric accuracies of 0.01, 0.1 and 1.0 mas, separations between WD and planet of 3 and 5 AU, a binary-system separation of 30 AU, planet masses of 10 Earth masses and 1 and 10 Jupiter masses, WD masses of 0.5 and 1.0 solar masses, and distances to the system of 10, 20 and 30 pc.
It was found that the PRIMA facility at the VLTI, once operating, will be able to detect planets down to 1 Jupiter mass around white dwarfs by measuring the astrometric wobble of the WD induced by a planetary companion. We show for the simulated observations that it is possible to model the orbits and recover the parameters describing the potential planetary systems.
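The size of the astrometric signal involved follows from a simple scaling: the semi-amplitude of the WD's reflex wobble is roughly (M_planet/M_WD)·(a/d), with the orbit size a in AU and the distance d in pc giving the angle in arcseconds. A minimal sketch (not the thesis code; the numbers are grid values quoted in the abstract):

```python
# Astrometric wobble of a white dwarf induced by a planetary companion.
# Semi-amplitude in arcsec ~ (M_planet / M_WD) * a[AU] / d[pc].
M_JUP = 9.54e-4  # Jupiter mass in solar masses

def wobble_mas(m_planet_msun, m_wd_msun, a_au, d_pc):
    """Semi-amplitude of the WD's astrometric wobble in milliarcseconds."""
    return 1e3 * (m_planet_msun / m_wd_msun) * a_au / d_pc

# One grid point from the abstract: 1 M_Jup at 3 AU around a 0.5 M_sun WD at 10 pc
signal = wobble_mas(M_JUP, 0.5, 3.0, 10.0)
print(f"{signal:.2f} mas")  # ~0.6 mas: resolvable at 0.1 mas accuracy, not at 1 mas
```

This illustrates why the 0.01-0.1 mas regime probed in the modelling is the interesting one: Jupiter-mass companions of nearby WDs produce sub-mas signals.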
The separation of natural and anthropogenically caused climatic changes is an important task of contemporary climate research. For this purpose, detailed knowledge of the natural variability of the climate during warm stages is a necessary prerequisite. Besides model simulations and historical documents, this knowledge is mostly derived from analyses of so-called climatic proxy data such as tree rings or sediment and ice cores. In order to interpret such sources of palaeoclimatic information appropriately, suitable approaches to statistical modelling and methods of time-series analysis are needed that are applicable to short, noisy, and non-stationary uni- and multivariate data sets. Correlations between different climatic proxy data within one or more climatological archives contain significant information about climatic change on longer time scales. Based on an appropriate statistical decomposition of such multivariate time series, one may estimate dimensions in terms of the number of significant, linearly independent components of the considered data set. In the presented work, a corresponding approach is introduced, critically discussed, and extended with respect to the analysis of palaeoclimatic time series. Temporal variations of the resulting measures allow one to derive information about climatic changes. For an example of trace-element abundances and grain-size distributions obtained near Cape Roberts (Eastern Antarctica), it is shown that the variability of the dimensions of the investigated data sets clearly correlates with the Oligocene/Miocene transition about 24 million years before present as well as with regional deglaciation events. Grain-size distributions in sediments carry information about the predominance of different transportation and deposition mechanisms. Finite mixture models may be used to approximate the corresponding distribution functions appropriately.
In order to give a complete description of the statistical uncertainty of the parameter estimates in such models, the concept of asymptotic uncertainty distributions is introduced. Its relationship with the mutual overlap of the components as well as with the information missing due to grouping and truncation of the measured data is discussed for a particular geological example. An analysis of a sequence of grain-size distributions obtained in Lake Baikal reveals certain problems accompanying the application of finite mixture models, which cause an extended climatological interpretation of the results to fail. As an appropriate alternative, a linear principal component analysis is used to decompose the data set into suitable fractions whose temporal variability correlates well with the variations of the average solar insolation on millennial to multi-millennial time scales. The abundance of coarse-grained material is evidently related to the annual snow cover, whereas a significant fraction of fine-grained sediments is likely transported from the Taklamakan desert via dust storms in the spring season.
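The linear PCA alternative described above can be sketched in a few lines of NumPy; the data below are a synthetic stand-in (two invented end-member grain-size distributions mixed in varying proportions), not the Lake Baikal record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 samples (depths) x 32 grain-size bins, built from
# a "coarse" and a "fine" end-member distribution mixed with random weights.
bins = np.linspace(0.0, 1.0, 32)
coarse = np.exp(-((bins - 0.8) / 0.1) ** 2)   # coarse end member
fine = np.exp(-((bins - 0.2) / 0.1) ** 2)     # fine end member
weights = rng.random((200, 1))                # mixing ratio per sample
data = weights * coarse + (1 - weights) * fine + 0.01 * rng.normal(size=(200, 32))

# Linear PCA via SVD of the centred data matrix
centred = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)

# With two end members, one component captures nearly all of the variance;
# its scores track the coarse/fine mixing ratio down-core.
scores = centred @ Vt[0]
print(f"variance explained by PC1: {explained[0]:.2%}")
```

The point of the decomposition is exactly this: the leading component scores form a time series of relative coarse/fine abundance that can then be compared with insolation curves.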
Motivated by the successful Karlsruhe dynamo experiment, a relatively low-dimensional dynamo model is proposed. It is based on a strong truncation of the magnetohydrodynamic (MHD) equations with an external forcing of the Roberts type and the requirement that the model system satisfies the symmetries of the full MHD system, so that the first symmetry-breaking bifurcations can be captured. The backbone of the Roberts dynamo is formed by the Roberts flow, a helical mean magnetic field and another part of the magnetic field coupled to these two by triadic mode interactions. A minimum truncation model (MTM) containing only these energetically dominating primary mode triads is fully equivalent to the widely used first-order smoothing approximation. However, it is shown that this approach works only in the limit of small wave numbers of the excited magnetic field or small magnetic Reynolds numbers ($Rm \ll 1$). To obtain dynamo action under more general conditions, secondary mode
The goal of a brain-computer interface (BCI) is the development of a unidirectional interface between a human and a computer that allows control of a device via brain signals alone. While the BCI systems of almost all other groups require the user to be trained over several weeks or even months, the group of Prof. Dr. Klaus-Robert Müller in Berlin and Potsdam, to which I belong, was one of the first research groups in this field to use machine learning techniques on a large scale. The adaptivity of the processing system to the individual brain patterns of the subject confers huge advantages for the user. BCI research is therefore considered a hot topic in machine learning and computer science. It requires interdisciplinary cooperation between disparate fields such as neuroscience, since only by combining machine learning and signal processing techniques based on neurophysiological knowledge will the largest progress be made. In this work I deal in particular with my part of this project, which lies mainly in the area of computer science. I have considered the following three main points: <b>Establishing a performance measure based on information theory:</b> I have critically examined the assumptions behind Shannon's information transfer rate for application in a BCI context. By establishing suitable coding strategies I was able to show that this theoretical measure approximates quite well what is practically achievable. <b>Transfer and development of suitable signal processing and machine learning techniques:</b> One substantial component of my work was to develop several machine learning and signal processing algorithms to improve the efficiency of a BCI. Based on the neurophysiological knowledge that several independent EEG features can be observed for some mental states, I have developed a method for combining different and possibly independent features, which improved performance.
In some cases the combination algorithm outperforms the best single feature by more than 50%. Furthermore, I have addressed, both theoretically and practically via the development of suitable algorithms, the question of the optimal number of classes that should be used for a BCI. It turned out that with the BCI performances reported so far, three or four different mental states are optimal. For another extension I have combined ideas from signal processing with those of machine learning, since a high gain can be achieved if the temporal filtering, i.e., the choice of frequency bands, is automatically adapted to each subject individually. <b>Implementation of the Berlin Brain-Computer Interface and realization of suitable experiments:</b> Finally, a further substantial component of my work was to realize an online BCI system which includes the developed methods but is also flexible enough to allow the simple realization of new algorithms and ideas. So far, bit rates of up to 40 bits per minute have been achieved with this system by completely untrained users, which is highly successful compared to the results of other groups.
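The information-theoretic measure referred to above is, in its standard BCI form, the Shannon channel capacity for N equiprobable classes with symmetric error probability (the Wolpaw bit rate). A minimal sketch; the decision rate and accuracy below are illustrative numbers, not results from the thesis:

```python
from math import log2

def bits_per_trial(n_classes: int, accuracy: float) -> float:
    """Shannon information per decision for n equiprobable classes,
    assuming errors are spread uniformly over the n - 1 wrong classes."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        return log2(n)
    return log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))

def bitrate(n_classes: int, accuracy: float, decisions_per_min: float) -> float:
    """Bits per minute: information per decision times decision rate."""
    return decisions_per_min * bits_per_trial(n_classes, accuracy)

# Illustrative example: a 3-class BCI at 85% accuracy, 20 decisions per minute
print(f"{bitrate(3, 0.85, 20):.1f} bits/min")
```

The trade-off discussed in the abstract is visible in this formula: adding classes raises log2(n) but typically lowers the achievable accuracy p, which is why three or four classes come out as optimal at realistic accuracies.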
Combined optimization of spatial and temporal filters for improving brain-computer interfacing
(2006)
Brain-computer interface (BCI) systems create a novel communication channel from the brain to an output device by bypassing conventional motor output pathways of nerves and muscles. They could therefore provide a new communication and control option for paralyzed patients. Modern BCI technology is essentially based on techniques for the classification of single-trial brain signals. Here we present a novel technique that allows the simultaneous optimization of a spatial and a spectral filter, enhancing the discriminability of multichannel EEG single trials. The evaluation of 60 experiments involving 22 different subjects demonstrates the significant superiority of the proposed algorithm over its classical counterpart: the median classification error rate was decreased by 11%. Apart from the enhanced classification, the spatial and/or spectral filters determined by the algorithm can also be used for further analysis of the data, e.g., for source localization of the respective brain rhythms.
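The spatial half of such an optimization is, in its classical form, Common Spatial Patterns (CSP): filters that maximize the variance ratio between two classes, obtained from a generalized eigenproblem on the class covariance matrices. A minimal sketch on synthetic data (this is plain CSP, not the combined spatio-spectral algorithm the paper proposes):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(cov_a, cov_b):
    """Solve cov_a w = lambda (cov_a + cov_b) w; returns eigenvalues and
    filter columns sorted by decreasing variance ratio for class A."""
    evals, evecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

rng = np.random.default_rng(1)
n_ch = 8

# Synthetic class covariances: class A carries extra variance along one source
mixing = rng.normal(size=(n_ch, n_ch))
base = mixing @ mixing.T + n_ch * np.eye(n_ch)   # shared background covariance
source = rng.normal(size=n_ch)
cov_a = base + 10.0 * np.outer(source, source)   # class A: background + source
cov_b = base.copy()                              # class B: background only

evals, W = csp_filters(cov_a, cov_b)
w0 = W[:, 0]  # first filter: largest class-A/class-B variance ratio
ratio = (w0 @ cov_a @ w0) / (w0 @ cov_b @ w0)
print(f"variance ratio along first CSP filter: {ratio:.1f}")
```

Projecting EEG trials onto the first and last few columns of W and taking log-variances is the usual feature extraction step before classification.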
In this paper, two sets of earthquake ground-motion relations for estimating peak ground and response spectral acceleration are developed for sites in southern Spain and in southern Norway using a recently published composite approach. For this purpose, seven empirical ground-motion relations developed from strong-motion data recorded in different parts of the world were employed. The different relations were first adjusted through a number of transformations to convert their differing choices of independent parameters to a single one. After these transformations, which account for the scatter they introduce, were performed, the equations were modified to account for differences between the host and the target regions, using the stochastic method to compute the host-to-target conversion factors. Finally, functions were fitted to the derived ground-motion estimates to obtain sets of seven individual equations for use in probabilistic seismic hazard assessment for southern Spain and southern Norway. The relations are compared with local ones published for the two regions. The composite methodology calls for setting up independent logic trees for the median values and for the sigma values, in order to properly separate epistemic and aleatory uncertainties after the corrections and conversions.
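In a logic tree over the median branches, the epistemic combination is a weighted average of the branch medians in log space. A schematic sketch; the functional form, coefficients and weights below are invented for illustration only and are not values from the paper:

```python
import math

# Hypothetical adjusted branch equations: ln(PGA in g) = c0 + c1*M + c2*ln(R + 10).
# All coefficients and weights are made up for illustration.
branches = [
    {"c": (-4.0, 0.90, -1.10), "weight": 0.2},
    {"c": (-3.8, 0.85, -1.05), "weight": 0.3},
    {"c": (-4.2, 0.95, -1.15), "weight": 0.5},
]

def ln_median(c, magnitude, distance_km):
    c0, c1, c2 = c
    return c0 + c1 * magnitude + c2 * math.log(distance_km + 10.0)

def weighted_median_pga(magnitude, distance_km):
    """Weight-average the branch medians in log space (epistemic combination);
    aleatory sigma would be handled by a separate logic tree."""
    lnm = sum(b["weight"] * ln_median(b["c"], magnitude, distance_km)
              for b in branches)
    return math.exp(lnm)

print(f"PGA(M6.0, 20 km) ~ {weighted_median_pga(6.0, 20.0):.3f} g")
```

Keeping the median tree separate from the sigma tree, as the composite methodology requires, means the combination above never mixes epistemic weights with aleatory scatter.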
Diagnosis and repair of negative polarity constructions in the light of symbolic resonance analysis
(2006)
In a post hoc analysis, we investigate differences in the event-related potentials of two studies (Drenhaus et al., 2004, to appear; Saddy et al., 2004) using the symbolic resonance analysis (Beim Graben & Kurths, 2003). The studies under discussion examined the failure to license a negative polarity item (NPI) in German: Saddy et al. (2004a) reported an N400 component when the NPI was not accurately licensed by negation; Drenhaus et al. (2004, to appear) additionally considered the influence of the constituency of the licensor in NPI constructions. A biphasic N400-P600 response was found for the two induced violations (the lack of a licensor and the inaccessibility of negation in a relative clause). The symbolic resonance analysis (SRA) revealed an effect in the P600 time window for the data of Saddy et al. that was not found with the averaging technique. The SRA of the ERPs in Drenhaus et al. showed that the P600 components are distinguishable in amplitude and latency: the P600 was smaller and earlier in the condition where the licensor is inaccessible than in the condition without negation in the string. Our findings suggest that the failure to license NPIs is not exclusively related to semantic integration costs (N400). The elicited P600 components reflect differences in syntactic processing. Our results confirm and replicate the effects of the traditional voltage-average analysis and show that the SRA is a useful tool for revealing and pulling apart ERP differences that are not evident in the traditional voltage-average analysis.
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and a corpus annotation showing that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models and addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can serve as a foundation for improving models of tutoring dialogue.
Use of large Acacia trees by the cavity dwelling Black-tailed Tree Rat in the southern Kalahari
(2006)
Recent extensive harvesting of large, often dead Acacia trees in the arid savanna of southern Africa is cause for concern about the conservation status of the arid savanna and its animal community. We mapped vegetation and nests of the Black-tailed Tree Rat Thallomys nigricauda to assess the extent to which the rats depend on particular tree species and on the existence of dead, standing trees. The study was conducted in continuous Acacia woodland on the southern and eastern edge of the Kalahari, South Africa. Trees containing tree rat nests were compared with trees of similar size and vigour to identify the characteristics of nest sites. Spatial analysis of the tree rat distribution was conducted using Ripley's L-function. We found that T. nigricauda was able to utilize all available tree species, as long as the trees were large and old enough for cavities to exist inside the stem. The spatial distribution of nest trees did not show clumping at the investigated scale, and we therefore reject the notion of the rats forming colonies when inhabiting continuous woodlands. The selection of a particular tree as a nest site furthermore depended on the close proximity of the major food plant, Acacia mellifera. This may limit the choice of suitable nest sites, since A. mellifera was less likely to grow within a vegetation patch containing a large tree than in patches without large trees.
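Ripley's L-function used in the spatial analysis above is the variance-stabilized transform of Ripley's K. A minimal NumPy sketch for a point pattern in a unit square (naive, without the edge correction that production implementations apply; the data are random, not the nest-tree coordinates):

```python
import numpy as np

def ripley_k(points, radii, area=1.0):
    """Naive Ripley's K for an (n, 2) array of points: for each radius r,
    the number of ordered point pairs closer than r, scaled by intensity."""
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diffs**2).sum(-1))
    np.fill_diagonal(dist, np.inf)  # exclude self-pairs
    counts = (dist[None, :, :] <= radii[:, None, None]).sum(axis=(1, 2))
    return area * counts / (n * (n - 1))

def ripley_l(points, radii, area=1.0):
    """L(r) = sqrt(K(r)/pi); under complete spatial randomness L(r) ~ r,
    so L(r) - r > 0 indicates clumping at scale r."""
    return np.sqrt(ripley_k(points, radii, area) / np.pi)

rng = np.random.default_rng(42)
pts = rng.random((400, 2))            # CSR pattern in the unit square
radii = np.linspace(0.01, 0.15, 15)
print(np.round(ripley_l(pts, radii) - radii, 3))  # near zero under CSR
```

Testing for colony formation amounts to checking whether L(r) - r for the observed nest trees exceeds the envelope obtained from many such simulated CSR patterns.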