Islamic movements in Iran
(2004)
The modernist Islamic movement sought to reconcile modern values with Islamic faith, attempted to express these values through an Islamic discourse, and aimed to reform political, religious and educational institutions along modernist lines. However, such a movement in the Islamic Republic of Iran raised controversy among the traditional leadership and secular intellectual groups. The aim of this paper is to discuss how far modernist Islam could progress in an Islamic republic with an old tradition.
The ultimate aim of this study is to better understand the relevance of weak electricity in the adaptive radiation of the African mormyrid fish. The chosen model taxon, the genus Campylomormyrus, exhibits a wide diversity of electric organ discharge (EOD) waveform types. Their EOD is age-, sex-, and species-specific and is an important character for discriminating among species that are otherwise cryptic. After having established a complementary set of molecular markers, I examined the radiation of Campylomormyrus by a combined approach of molecular data (sequence data from the mitochondrial cytochrome b and the nuclear S7 ribosomal protein gene, as well as 18 microsatellite loci developed especially for the genus Campylomormyrus), observation of the ontogeny and diversification of the EOD waveform, and morphometric analysis of relevant morphological traits. I built the first convincing phylogenetic hypothesis for the genus Campylomormyrus. Taking advantage of the microsatellite data, the identified phylogenetic clades proved to be reproductively isolated biological species. In this way I detected at least six species occurring in sympatry near Brazzaville/Kinshasa (Congo Basin). By combining molecular data and EOD analyses, I could show that there are three cryptic species, characterised by their own adult EOD types, hidden under a common juvenile EOD form. In addition, I confirmed that the adult male EOD is species-specific and is more different among closely related species than among more distantly related ones. This result, and the observation that the EOD changes with maturity, suggest its function as a reproductive isolation mechanism. As a result of my morphometric shape analysis, I could assign species types to the identified reproductively isolated groups to produce a sound taxonomy of the group. Besides this, I could also identify morphological traits relevant for the divergences between the identified species.
Among them, the variations I found in the shape of the trunk-like snout suggest the presence of different trophic specializations; therefore, this trait might have been involved in the ecological radiation of the group. In conclusion, I provide a convincing scenario envisioning an adaptive radiation of weakly electric fish triggered by sexual selection via assortative mating due to differences in EOD characteristics, but ultimately caused by divergent selection on morphological traits correlated with feeding ecology.
Quantum dots (QDs) are common as luminescing markers for imaging in biological applications because their optical properties seem to be inert against their surrounding solvent. This, together with broad and strong absorption bands and intense, sharp, tuneable luminescence bands, makes them interesting candidates for methods utilizing Förster Resonance Energy Transfer (FRET), e.g. for sensitive homogeneous fluoroimmunoassays (FIA). In this work we demonstrate energy transfer from Eu³⁺-trisbipyridine (Eu-TBP) donors to CdSe-ZnS-QD acceptors in solutions with and without serum. The QDs are commercially available CdSe-ZnS core-shell particles emitting at 655 nm (QD655). The FRET system was achieved by the binding of the streptavidin-conjugated donors with the biotin-conjugated acceptors. After excitation of Eu-TBP, and as a result of the energy transfer, the luminescence of the QD655 acceptors also showed lengthened decay times like the donors. The energy transfer efficiency, as calculated from the decay times of the bound and the unbound components, amounted to 37%. The Förster radius, estimated from the absorption and emission bands, was ca. 77 Å. The effective binding ratio, which depends not only on the ratio of binding pairs but also on unspecific binding, was obtained from the donor emission as a function of concentration. As serum promotes unspecific binding, the overall FRET efficiency of the assay was reduced. We conclude that QDs are good substitutes for acceptors in FRET if combined with slow-decay donors like europium. The investigation of the influence of the serum provides guidance towards improving binding properties of QD assays.
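The quantities in this abstract follow from the standard FRET lifetime relations. The sketch below is illustrative only: the decay times are hypothetical values chosen to reproduce the reported 37% efficiency, not the measured data, and the distance formula simply inverts the usual Förster relation.

```python
def fret_efficiency(tau_bound, tau_unbound):
    """FRET efficiency from donor decay times: E = 1 - tau_DA / tau_D."""
    return 1.0 - tau_bound / tau_unbound

def distance_from_efficiency(efficiency, forster_radius):
    """Donor-acceptor distance r from E and the Foerster radius R0:
    E = 1 / (1 + (r/R0)**6)  =>  r = R0 * ((1 - E) / E)**(1/6)."""
    return forster_radius * ((1.0 - efficiency) / efficiency) ** (1.0 / 6.0)

# Illustrative numbers only: decay times chosen so that E matches the
# 37% reported in the abstract; they are not the measured lifetimes.
E = fret_efficiency(tau_bound=0.63, tau_unbound=1.0)   # -> 0.37
r = distance_from_efficiency(E, forster_radius=77.0)   # distance in Angstrom
```

With the reported Förster radius of 77 Å, a 37% efficiency would correspond to a donor-acceptor distance somewhat above R0, consistent with an efficiency below 50%.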
Contents: Basic considerations on the development of guiding visions (Leitbilder) -Guiding visions in the context of a city marketing concept -A model for the development of guiding visions -The guiding vision as an element in the development of a city marketing concept -Functions of guiding visions -Requirements for guiding visions -Examples of guiding-vision development for the cities of Hennigsdorf and Potsdam
Contents: Introduction: -Some Introductory Examples -Consumer-relevant Utility Dimensions -Communication Flow between the Relevant Actors -Risk Communication Dimensions -Complete Model -Aims of the Study Method: -Participants -Procedure -Content Analysis Results: -Sample Category 1: Food safety -Sample Category 2: Product Quality -Sample Category 3: Freedom of Choice -Sample Category 4: Decision Power over Foodstuffs -Strategy 1: Scientific Information Approach -Strategy 2: Balanced Information Approach -Strategy 3: Product Information Approach -Strategy 4: Classical Advertising -Strategy 5: Trust me I'm no Baddie -Strategy 6: Induction of Fear
In view of the importance of charge storage in polymer electrets for electromechanical transducer applications, the aim of this work is to contribute to the understanding of the charge-retention mechanisms. Furthermore, we will try to explain how the long-term storage of charge carriers in polymeric electrets works and to identify the probable trap sites. Charge trapping and de-trapping processes were investigated in order to obtain evidence of the trap sites in polymeric electrets. The charge de-trapping behavior of two particular polymer electrets was studied by means of thermal and optical techniques. In order to obtain evidence of trapping or de-trapping, charge and dipole profiles in the thickness direction were also monitored. In this work, the study was performed on polyethylene terephthalate (PETP) and on cyclic-olefin copolymers (COCs). PETP is a photo-electret and contains a net dipole moment that is located in the carbonyl group (C = O). The electret behavior of PETP arises from both the dipole orientation and the charge storage. In contrast to PETP, COCs are not photo-electrets and do not exhibit a net dipole moment. The electret behavior of COCs arises from the storage of charges only. COC samples were doped with dyes in order to probe their internal electric field. COCs show shallow charge traps at 0.6 and 0.11 eV, characteristic for thermally activated processes. In addition, deep charge traps are present at 4 eV, characteristic for optically stimulated processes. PETP films exhibit a photo-current transient with a maximum that depends on the temperature with an activation energy of 0.106 eV. The pair thermalization length (rc) calculated from this activation energy for the photo-carrier generation in PETP was estimated to be approx. 4.5 nm. The generated photo-charge carriers can recombine, interact with the trapped charge, escape through the electrodes or occupy an empty trap. 
PETP possesses a small quasi-static pyroelectric coefficient (QPC): ~0.6 nC/(m²K) for unpoled samples, ~60 nC/(m²K) for poled samples and ~60 nC/(m²K) for unpoled samples under an electric bias (E ~10 V/µm). When stored charges generate an internal electric field of approx. 10 V/µm, they are able to induce a QPC comparable to that of the oriented dipoles. Moreover, we observe charge-dipole interaction. Since the raw data of the QPC experiments on PETP samples are noisy, a numerical Fourier-filtering procedure was applied. Simulations show that the data analysis is reliable when the noise level is up to 3 times larger than the calculated pyroelectric current for the QPC. PETP films revealed shallow traps at approx. 0.36 eV during thermally-stimulated current measurements. These energy traps are associated with molecular dipole relaxations (C = O). On the other hand, photo-activated measurements yield deep charge traps at 4.1 and 5.2 eV. The observed wavelengths belong to transitions in PETP that are analogous to the π-π* benzene transitions. The observed charge de-trapping selectivity in the photo-charge decay indicates that the charge de-trapping results from a direct photon-charge interaction. Additionally, the charge de-trapping can be facilitated by photo-exciton generation and the interaction of the photo-excitons with trapped charge carriers. These results indicate that the benzene rings (C₆H₄) and the dipolar groups (C = O) can stabilize and share an extra charge carrier in a chemical resonance. In this way, this charge could be de-trapped in connection with the photo-transitions of the benzene ring and with the dipole relaxations. The thermally-activated charge release shows a difference in trap depth from its optical counterpart. This difference indicates that the trap levels depend on the de-trapping process and on the chemical nature of the trap site. That is, the processes of charge de-trapping from shallow traps are related to secondary forces.
The processes of charge de-trapping from deep traps are related to primary forces. Furthermore, the presence of deep trap levels causes the stability of the charge for long periods of time.
What can we learn from climate data? : Methods for fluctuation, time/scale and phase analysis
(2006)
Since Galileo Galilei invented the first thermometer, researchers have tried to understand the complex dynamics of ocean and atmosphere by means of scientific methods. They observe nature and formulate theories about the climate system. For some decades now, powerful computers have been capable of simulating the past and future evolution of the climate. Time series analysis tries to link the observed data to the computer models: using statistical methods, one estimates characteristic properties of the underlying climatological processes that in turn can enter the models. The quality of an estimation is evaluated by means of error bars and significance testing. On the one hand, such a test should be capable of detecting interesting features, i.e. be sensitive. On the other hand, it should be robust and sort out false positive results, i.e. be specific. This thesis mainly aims to contribute to methodological questions of time series analysis, with a focus on sensitivity and specificity, and to apply the investigated methods to recent climatological problems. First, the inference of long-range correlations by means of Detrended Fluctuation Analysis (DFA) is studied. It is argued that power-law scaling of the fluctuation function, and thus long memory, may not be assumed a priori but has to be established. This requires investigating the local slopes of the fluctuation function. The variability characteristic of stochastic processes is accounted for by calculating empirical confidence regions. The comparison of a long-memory with a short-memory model shows that the inference of long-range correlations from a finite amount of data by means of DFA is not specific. When aiming to infer short memory by means of DFA, a local slope larger than α = 0.5 at large scales does not necessarily imply long memory. Also, a finite scaling of the autocorrelation function is shifted to larger scales in the fluctuation function.
It turns out that long-range correlations cannot be concluded unambiguously from the DFA results for the Prague temperature data set. In the second part of the thesis, an equivalence class of nonstationary Gaussian stochastic processes is defined in the wavelet domain. These processes are characterized by means of wavelet multipliers and exhibit well-defined time-dependent spectral properties; they allow one to generate realizations of any nonstationary Gaussian process. The dependency of the realizations on the wavelets used for the generation is studied, and bias and variance of the wavelet sample spectrum are calculated. To overcome the difficulties of multiple testing, an areawise significance test is developed and compared to the conventional pointwise test in terms of sensitivity and specificity. Applications to climatological and hydrological questions are presented. In the last part, the coupling between the El Niño/Southern Oscillation (ENSO) and the Indian Monsoon on inter-annual time scales is studied by means of the Hilbert transform and a curvature-defined phase. This method allows one to investigate the relation of two oscillating systems with respect to their phases, independently of their amplitudes. The performance of the technique is evaluated using a toy model. From the data, distinct epochs are identified, especially two intervals of phase coherence, 1886-1908 and 1964-1980, confirming earlier findings from a new point of view. A significance test of high specificity corroborates these results. Periods of coupling so far unknown and invisible to linear methods are also detected. These findings suggest that the decreasing correlation during the last decades might be partly inherent to the ENSO/Monsoon system.
Finally, a possible interpretation of how volcanic radiative forcing could cause the coupling is outlined.
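The DFA procedure discussed in this abstract (profile, windowing, per-window detrending, fluctuation function, local slopes) can be sketched compactly. This is a generic first-order DFA illustration under simple assumptions (non-overlapping windows, linear detrending), not the thesis's actual implementation:

```python
import math
import random

def dfa(series, scales):
    """First-order Detrended Fluctuation Analysis.

    For each window size s: build the profile (cumulative sum of the
    mean-removed series), split it into non-overlapping windows of
    length s, subtract a least-squares linear trend per window, and
    return the root-mean-square fluctuation F(s). As the abstract
    stresses, long memory should be inferred from the *local* slopes
    of log F(s) vs. log s, not assumed a priori.
    """
    n = len(series)
    mean = sum(series) / n
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)

    result = []
    for s in scales:
        sq_sum, count = 0.0, 0
        t_mean = (s - 1) / 2.0
        var_t = sum((t - t_mean) ** 2 for t in range(s))
        for start in range(0, n - s + 1, s):
            window = profile[start:start + s]
            y_mean = sum(window) / s
            slope_w = sum((t - t_mean) * (y - y_mean)
                          for t, y in enumerate(window)) / var_t
            intercept = y_mean - slope_w * t_mean
            sq_sum += sum((y - (slope_w * t + intercept)) ** 2
                          for t, y in enumerate(window))
            count += s
        result.append(math.sqrt(sq_sum / count))
    return result

# White (short-memory) noise should give a fluctuation exponent near 0.5.
random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
scales = [8, 16, 32, 64]
F = dfa(noise, scales)
alpha = (math.log(F[-1]) - math.log(F[0])) / (math.log(scales[-1]) - math.log(scales[0]))
```

Note that for a finite sample the estimated exponent scatters around 0.5 even for uncorrelated data, which is exactly why the thesis argues for empirical confidence regions rather than reading long memory off a single fitted slope.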
Public pensions in the U.S.
(2005)
Contents: The Public Old Age Insurance of the U.S. -Historical overview -Technical details -Individual equity and social adequacy The Economic Problem of Old Age -Risks and economic security -Old age, retirement, and individual precaution -Insurance markets, market failures, and social insurance -Options for public pension systems The Problems of Social Security -The financial balance of OASDI -Causes of the long-run problems -Rates of return -Conclusion - The case for Social Security reform Proposed Remedies -Full, partial, or no privatization? -The President's Commission to Strengthen Social Security -Kotlikoff's Personal Security System -The Diamond-Orszag Three-Part plan
Many European countries have experienced a significant increase in unemployment in recent years. This paper reviews several theoretical models that try to explain this phenomenon. Predominantly, these models claim a link between the poor performance of European labor markets and the high level of market regulation. Commonly referred to as the Eurosclerosis debate, prominent approaches consider insider-outsider relationships, search models, and the influence of hiring and firing costs on equilibrium employment. The paper presents empirical evidence for each model and studies the relevance of the identified rigidities as a determinant of high unemployment in Europe. Furthermore, a case study analyzes the unemployment problem in Germany and critically discusses new reform efforts. In particular, this section analyzes whether the recently enacted Hartz reforms can induce higher employment.
Revisiting public investment
(2004)
The consumption equivalence method is the theoretical basis of public cost-benefit analysis. Consumption equivalence public capital prices are explicitly introduced in order to account sufficiently for the opportunity cost of public expenditure. This can resolve the dispute about the social rate of discount within public cost-benefit analysis, which arose from a criterion that looks similar to the capital value formula, known as Lind's approach. The social rate of discount is thereby liberated from opportunity-cost considerations, and the discounting away of effects on future welfare vanishes. The related question of whether one should accept a positive value for the pure rate of social time preference is an old issue. Its current state between the prescriptive and the descriptive view can also be interpreted as a consequence of the oversimplification of standard cost-benefit analysis. But apart from being an economic process in itself, the pure rate of social time preference is also defined as a business-as-usual value of social distance discounting. Hence, a political choice has to be made about this rate, which is in principle free.
An exhaustive and disjoint decomposition of social choice situations is derived in a general set-theoretical framework, using the new tool of the lifted Pareto relation on the power set of social states, which represents a pre-choice comparison of choice option sets. The main result is a classification of social choice situations that includes three types of social choice problems. First, we usually observe the common incompleteness of the Pareto relation. Second, a kind of non-compactness problem of a choice set of social states can arise. Finally, both can be combined. The first problem root can be regarded as the natural everyday dilemma of social choice theory, whereas the second may be much more an implication of modeling technique. The distinction is made at a very general set-theoretical level. Hence, the derived classification of social choice situations is applicable to almost every relevant economic model.
Contents: Actors, Markets and Interest Groups in Health Services Private and Social Health Insurance in a Simple Model Misallocation and Malpractice in Social Health Care and Insurances -The UK Health Care System -The German Social Health Insurance System -Current Discussions: Intertemporal Perspective and Fundamental Change Interplay of Public and Private Health Insurance: Lessons for Countries in Transition Summary: The Necessary Steps to a Fundamental Reform
Contents: Targets, Means and Benefits of Social Protection Standard Risks and Possible Institutional Settings for Social Protection -Market Structure for Pension and Health Insurance -Systems of Social Protection and Security -Replacement Ratios and Income Taxation Social Protection in Selected European Countries: Germany, Austria, The Netherlands, United Kingdom -Pension System -Health System -Unemployment Insurance -Accident Insurance -Basic Security System -Taxation of Wages and Profits The Overall Burden of Taxes and Social Protection Expenses Necessary Reforms, Lessons for Russia and a Basic Approach for a Blueprint -Basic Features of the Reform Process -Reforms within the Branches of Social Protection -Integrated Tax and Transfer Reform -Empirical Evaluation of Tax and Transfer Reforms
The politico-economic situation in Germany : chances for changes in resource and energy economics
(2002)
Contents: Regional Management, Land Use and Energy Production -Biophysical View -First Hypothesis -International and Interregional Cooperation -Second Hypothesis -Partnership with Nature Sustainability and the Agricultural Sector -Traditional Farming -Mono-cultural Bio-industry -Liquid Manure Problems -Clean Drinking Water -Integrated Agro-industrial System -Ecological Farming -Ecotones and Bio-manipulation Regional Economic and Agricultural Policy -New Roles for the Agricultural Sector
Face-to-face communication is multimodal. In unscripted spoken discourse we can observe the interaction of several "semiotic layers", modalities of information such as syntax, discourse structure, gesture, and intonation. We explore the role of gesture and intonation in structuring and aligning information in spoken discourse through a study of the co-occurrence of pitch accents and gestural apices. Metaphorical spatialization through gesture also plays a role in conveying the contextual relationships between the speaker, the government and other external forces in a naturally-occurring political speech setting.
The present study examines native and nonnative perceptual processing of semantic information conveyed by prosodic prominence. Five groups of German learners of English each listened to one of five experimental conditions. Three conditions differed in the place of the focus accent in the sentence, and two conditions used spliced stimuli. Each condition was presented first in the learners' L1 (German) and then in a similar set in the L2 (English). The effect of the accent condition and of the length and position of the target in the sentence was evaluated in a probe recognition task. In both the L1 and L2 tasks there was no significant effect of any of the five focus conditions. Target position and target word length had an effect in the L1 task. Word length did not affect accuracy rates in the L2 task. For probe recognition in the L2, word length and the position of the target interacted with the focus condition.
This paper investigates the structural properties of morphosyntactically marked focus constructions, focussing on the often neglected non-focal sentence part in African tone languages. Based on new empirical evidence from five Gur and Kwa languages, we claim that these focus expressions have to be analysed as biclausal constructions even though they do not represent clefts containing restrictive relative clauses. First, we relativize the partly overgeneralized assumptions about structural correspondences between the out-of-focus part and relative clauses, and second, we show that our data do in fact support the hypothesis of a clause coordinating pattern as present in clause sequences in narration. It is argued that we deal with a non-accidental, systematic feature and that grammaticalization may conceal such basic narrative structures.
The Semantics of Ellipsis
(2005)
There are four phenomena that are particularly troublesome for theories of ellipsis: the existence of sloppy readings when the relevant pronouns cannot possibly be bound; an ellipsis being resolved in such a way that an ellipsis site in the antecedent is not understood in the way it was there; an ellipsis site drawing material from two or more separate antecedents; and ellipsis with no linguistic antecedent. These cases are accounted for by means of a new theory that involves copying syntactically incomplete antecedent material and an analysis of silent VPs and NPs that makes them into higher order definite descriptions that can be bound into.
Stop bashing givenness!
(2005)
Elke Kasimir’s paper (in this volume) argues against employing the notion of Givenness in the explanation of accent assignment. I will claim that the arguments against Givenness put forward by Kasimir are inconclusive because they beg the question of the role of Givenness. It is concluded that, more generally, arguments against Givenness as a diagnostic for information structural partitions should not be accepted offhand, since the notion of Givenness of discourse referents is (a) theoretically simple, (b) readily observable and quantifiable, and (c) bears cognitive significance.
In order to investigate the empirical properties of focus, it is necessary to diagnose focus (or: "what is focused") in particular linguistic examples. It is often taken for granted that the application of one single diagnostic tool, the so-called question-answer test, which roughly says that whatever a question asks for is focused in the answer, is a fool-proof test for focus. This paper investigates one example class where such uncritical belief in the question-answer test has led to the assumption of rather complex focus projection rules: in these examples, pitch accent placement has been claimed to depend on whether certain parts of the focused constituents are given or not. It is demonstrated that such focus projection rules are unnecessarily complex and in turn require the assumption of unnecessarily complicated meaning rules, not to speak of the difficulty of giving a precise semantic/pragmatic definition of the allegedly involved givenness property. For the sake of the argument, an alternative analysis is put forward which relies solely on alternative sets, following Mats Rooth's work, and avoids any recourse to givenness. As it turns out, this alternative analysis is not only simpler but also makes better predictions in a critical case.
We present a system for the linguistic exploration and analysis of lexical cohesion in English texts. Using an electronic thesaurus-like resource, Princeton WordNet, and the Brown Corpus of English, we have implemented a process of annotating text with lexical chains and a graphical user interface for inspection of the annotated text. We describe the system and report on some sample linguistic analyses carried out using the combined thesaurus-corpus resource.
This paper discusses the use of XSLT stylesheets as a filtering mechanism for refining the results of user queries on treebanks. The discussion is within the context of the TIGER treebank, the associated search engine and query language, but the general ideas can apply to any search engine for XML-encoded treebanks. It will be shown that important classes of linguistic phenomena can be accessed by applying relatively simple XSLT templates to the output of a query, effectively simulating the universal quantifier for a subset of the query language.
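Since the paper's actual TIGER stylesheets are not reproduced here, the idea of post-filtering query output with a simulated universal quantifier can be sketched with a Python/ElementTree analogue. The result format, element names, and attributes below are invented for illustration; the real TIGER export format differs in detail:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified query output: each <match> holds the nodes
# returned for one query hit.
RESULTS = """
<results>
  <match id="m1"><node cat="NP" case="nom"/><node cat="NP" case="acc"/></match>
  <match id="m2"><node cat="NP" case="nom"/><node cat="NP" case="nom"/></match>
</results>
"""

def filter_universal(xml_text, cat, attr, value):
    """Keep a match only if ALL of its nodes of category `cat` carry
    attr=value -- the post-hoc universal quantification that the paper
    implements with XSLT templates over the search engine's output."""
    root = ET.fromstring(xml_text)
    kept = []
    for match in root.iter("match"):
        nodes = [n for n in match.iter("node") if n.get("cat") == cat]
        if nodes and all(n.get(attr) == value for n in nodes):
            kept.append(match.get("id"))
    return kept

# Only m2 has exclusively nominative NPs; m1 contains an accusative NP.
hits = filter_universal(RESULTS, "NP", "case", "nom")
```

The point mirrors the paper's: the query engine returns existentially matched hits, and a simple template over that output is enough to express "every node of this kind satisfies the condition".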
Fronting of a non-finite VP across a finite main verb - akin to German "VP-topicalization" - can be found also in Czech and Polish. The paper discusses evidence from large corpora for this process and some of its properties, both syntactic and information-structural. Based on this case, criteria for more user-friendly searching and retrieval of corpus data in syntactic research are developed.
Multiple hierarchies
(2005)
In this paper, we present the Multiple Annotation approach, which solves two problems: the problem of annotating overlapping structures, and the problem that occurs when documents should be annotated according to different, possibly heterogeneous tag sets. This approach has many advantages: it is based on XML, the modeling of alternative annotations is possible, each level can be viewed separately, and new levels can be added at any time. The files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) are described. These representations serve as a base for several applications.
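The "text as implicit link" idea from this abstract can be sketched as standoff annotation: each level lives in its own file-like structure and is anchored to the shared base text by character offsets, so overlapping structures and heterogeneous tag sets can coexist. All names and the offset scheme below are illustrative, not the paper's actual format:

```python
# Base text shared by all annotation levels.
TEXT = "Peter sleeps"

LEVELS = {
    # level name -> list of (start, end, label) spans over TEXT
    "pos":    [(0, 5, "NNP"), (6, 12, "VBZ")],
    "syntax": [(0, 5, "NP"), (0, 12, "S")],        # spans may nest ...
    "info":   [(0, 5, "topic"), (6, 12, "focus")], # ... or overlap other levels
}

def annotations_at(pos):
    """Collect, per level, the labels of all spans covering offset `pos`.
    The shared character offsets act as the implicit link that makes the
    separate levels an interrelated unit."""
    return {
        level: [label for start, end, label in spans if start <= pos < end]
        for level, spans in LEVELS.items()
    }
```

Because levels never point at each other directly, a new level (or an alternative annotation of an existing one) can be added at any time without touching the others, which is the advantage the abstract emphasizes.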
This paper describes the standardization problems that come up in a diachronic corpus: it has to cope with differing standards with regard to diplomaticity, annotation, and header information. Such highly heterogeneous texts must be standardized to allow for comparative research without (too much) loss of information.
Unity in diversity
(2005)
This paper describes the creation and preparation of TUSNELDA, a collection of corpus data built for linguistic research. This collection contains a number of linguistically annotated corpora which differ in various aspects such as language, text sorts / data types, encoded annotation levels, and the linguistic theories underlying the annotation. The paper focuses on this variation on the one hand and on the way these heterogeneous data are integrated into one resource on the other.
The concepts of food deficit, hunger, undernourishment and food security are discussed. Axioms and indices for the assessment of the nutrition of individuals and groups are suggested. Furthermore, a measure of food aid donor performance is developed and applied to a sample of bilateral and multilateral donors providing food aid to African countries.
The paper is an enquiry into dynamic social contract theory. The social contract defines the rules of resource use. An intergenerational social contract in an economy with a single exhaustible resource is examined within a framework of an overlapping generations model. It is assumed that new generations do not accept the old social contract, and access to resources will be renegotiated between any incumbent generation and their successors. It turns out that later generations will be in an unfortunate position regardless of their bargaining power.
In modern political philosophy social contract theory is the most prominent approach to individual rights and fair institutions. According to social contract theory the system of rights in a society ought to be justified by reconstructing its basic features as a contract between the mutually unconcerned members of society. This paper explores whether social contract theory can successfully be applied to justify rights of future generations. Three competing views are analysed: Rawls's theory of justice, Hobbes's radical liberalism and Gauthier's bargaining framework based on the Lockean proviso.
The value concept of traditional resource economics is welfare. Therefore, sustainability of welfare is often taken to characterise our obligations to future generations. This paper argues that this view is inappropriate because it leaves no room for future generations' autonomy. Future generations should be free to make their own decisions. Consequently, freedom of choice is the appropriate value concept on which resource economics should be based. The concept of sustainability thus receives a new interpretation: sustainability is a principle of intertemporal distributive justice which requires equitable opportunities across generations.
ANNIS
(2004)
In this paper, we discuss the design and implementation of our first version of the database "ANNIS" ("ANNotation of Information Structure"). For research based on empirical data, ANNIS provides a uniform environment for storing this data together with its linguistic annotations. A central database promotes standardized annotation, which facilitates interpretation and comparison of the data. ANNIS is used through a standard web browser and offers tier-based visualization of data and annotations, as well as search facilities that allow for cross-level and cross-sentential queries. The paper motivates the design of the system, characterizes its user interface, and provides an initial technical evaluation of ANNIS with respect to data size and query processing.
Focus strategies in Chadic
(2004)
We argue that the standard focus theories reach their limits when confronted with the focus systems of the Chadic languages. The backbone of the standard focus theories consists of two assumptions, both called into question by the languages under consideration. Firstly, it is standardly assumed that focus is generally marked by stress. The Chadic languages, however, exhibit a variety of different devices for focus marking. Secondly, it is assumed that focus is always marked. In Tangale, at least, focus is not marked consistently on all types of constituents. The paper offers two possible solutions to this dilemma.
In this paper we review the current state of research on the discourse structure (DS)/information structure (IS) interface. This field has received a lot of attention from discourse semanticists and pragmaticists, and has made substantial progress in recent years. In this paper we summarize the relevant studies. In addition, we look at the issue of DS/IS interaction at a different level - that of phonetics. It is known that both information structure and discourse structure can be realized prosodically, but the question of phonetic interaction between the prosodic devices they employ has hardly ever been discussed in this context. We think that a proper consideration of this aspect of DS/IS interaction would enrich our understanding of the phenomenon, and hence we formulate some related research-programmatic positions.
Prosody by phase
(2004)
Japanese wh-questions always exhibit focus intonation (FI). Furthermore, the domain of FI exhibits a correspondence to the wh-scope. I propose that this phonology-semantics correspondence is a result of the cyclic computation of FI, which is explained under the notion of Multiple Spell-Out in the recent Minimalist framework. The proposed analysis makes two predictions: (1) embedding of an FI into another is possible; (2) (overt) movement of a wh-phrase to a phase edge position causes a mismatch between FI and wh-scope. Both predictions are tested experimentally, and shown to be borne out.
We argue that there is a crucial difference between determiner and adverbial quantification. Following Herburger [2000] and von Fintel [1994], we assume that determiner quantifiers quantify over individuals and adverbial quantifiers over eventualities. While it is usually assumed that the semantics of sentences with determiner quantifiers and those with adverbial quantifiers are essentially equivalent, we show by way of new data that quantification over events is more restricted than quantification over individuals. This is because eventualities, in contrast to individuals, have to be located in time, which is done using contextual information according to a pragmatic resolution strategy. If the contextual information and the tense information given in the respective sentence contradict each other, the sentence is uninterpretable. We conclude that this is why, in these cases, adverbial quantification, i.e. quantification over eventualities, is impossible, whereas quantification over individuals is fine.
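The contrast can be illustrated with a schematic pair of tripartite logical forms; the example sentences, predicates and notation below are illustrative choices, not formulas from the paper:

```latex
% Determiner quantification: the quantifier binds individuals
\text{``Most linguists smoke.''}\qquad
\mathrm{MOST}_x\,[\,\mathrm{linguist}(x)\,]\,
  [\,\exists e\,(\mathrm{smoke}(e)\wedge\mathrm{Agent}(e,x))\,]

% Adverbial quantification: the quantifier binds eventualities, whose
% running time \tau(e) must be located within a contextually resolved time t
\text{``Linguists usually smoke.''}\qquad
\mathrm{MOST}_e\,[\,C(e)\wedge\tau(e)\subseteq t\,]\,[\,\mathrm{smoke}(e)\,]
```

On this sketch, the uninterpretability discussed above corresponds to the case where no value for $t$ consistent with both context and tense can be resolved.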
This paper investigates the nature of the attraction of XPs to clause-initial position in German (and other languages). It argues that there are two different types of preposing. First, an XP can move when it is attracted by an EPP-like feature of Comp. Comp can, however, also attract elements that bear the formal marker of some semantic or pragmatic (information theoretic) function. This second type of movement is driven by the attraction of a formal property of the moved element. It has often been misanalysed as “operator” movement in the past.
Results of a production experiment on the placement of sentence accent in German are reported. The hypothesis that German fulfills some of the most widely accepted rules of accent assignment, predicting focus domain integration, was only partly confirmed. Adjacency between argument and verb induces a single accent on the argument, as recognized in the literature, but interruption of this sequence by a modifier often induces remodeling of the accent pattern with a single accent on the modifier. The verb is rarely stressed. All models based on linear alignment or adjacency between elements belonging to a single accent domain fail to account for this result. A cyclic analysis of prosodic domain formation is proposed in an optimality-theoretic framework that can explain the accent pattern.
The present third volume of the series "Interdisciplinary Studies on Information Structure" contains seven contributions from various projects of the Collaborative Research Centre "Information Structure: The Linguistic Means for Structuring Utterances, Sentences and Texts" (SFB 632). The title "Approaches and Findings in Oral, Written and Gestural Language" reflects the breadth of research on information structure. In her article, Elke Kasimir questions the reliability of the so-called question-answer test for identifying the focused element of a sentence; her alternative proposal is critically discussed in a commentary by Thomas Weskott. Paul Elbourne's article deals with phenomena of ellipsis and offers a new semantic analysis. Ines Fiedler and Anne Schwarz analyse morphologically strongly marked focus constructions from five African languages of the Gur and Kwa groups and interpret them diachronically. The article by Roland Hinterhölzl, Svetlana Petrova and Michael Solf is likewise historically oriented, presenting evidence for the interaction of word order and information structure already in the Old High German Tatian translation. Anke Sennema, Ruben van de Vijver, Susanne E. Carroll and Anne Zimmer-Stahl use a series of experiments to discuss the role of prosody, word length and word position in semantic interpretation in first and second language. Stefanie Jannedy and Norma Mendoza-Denton highlight the special role of gesture, in combination with intonation, in structuring spoken discourse.
The papers in this volume were presented at the workshop 'Heterogeneity in Linguistic Databases', which took place on July 9, 2004 at the University of Potsdam. The workshop was organized by project D1, 'Linguistic Database for Information Structure: Annotation and Retrieval', a member project of the SFB 632, a collaborative research center entitled 'Information Structure: the Linguistic Means for Structuring Utterances, Sentences and Texts'. The workshop brought together both developers and users of linguistic databases from a number of research projects which work on an empirical basis, all of which have to cope with different sorts of heterogeneity: primary linguistic data and annotated information may be heterogeneous, as may the data structures representing them. The first four papers (by Wagner, Schmidt, Lüdeling, and Witt) address aspects of heterogeneous data from the point of view of database developers; the remaining three papers (by Meyer, Smith, and Teich/Fankhauser) focus on data exploitation by the users.
Interdisciplinary studies on information structure : ISIS ; working papers of the SFB 632. - Vol. 1
(2004)
Contents: A1: Phonology and syntax of focussing and topicalisation: Gisbert Fanselow: Cyclic Phonology–Syntax-Interaction: Movement to First Position in German Caroline Féry and Laura Herbst: German Sentence Accent Revisited Shinichiro Ishihara: Prosody by Phase: Evidence from Focus Intonation–Wh-scope Correspondence in Japanese A2: Quantification and information structure: Cornelia Endriss and Stefan Hinterwimmer: The Influence of Tense in Adverbial Quantification A3: Rhetorical Structure in Spoken Language: Modeling of Global Prosodic Parameters: Ekaterina Jasinskaja, Jörg Mayer and David Schlangen: Discourse Structure and Information Structure: Interfaces and Prosodic Realization B2: Focussing in African Tchadic languages: Katharina Hartmann and Malte Zimmermann: Focus Strategies in Chadic: The Case of Tangale Revisited D1: Linguistic database for information structure: Annotation and retrieval: Stefanie Dipper, Michael Götze, Manfred Stede and Tillmann Wegst: ANNIS: A Linguistic Database for Exploring Information Structure
In this work, approaches for developing new detection systems for the Analytical Ultracentrifuge (AUC) were explored. Unlike in chromatographic fractionation techniques, a multidetection system for the AUC has not yet been implemented to its full extent, despite its potential benefits. In this study, we attempted to couple established spectroscopic and scattering techniques, routinely used for extracting analyte information, to the AUC. Trials were performed to adapt Raman spectroscopy, light scattering and UV/Vis detection (with the possibility to work with the whole range of wavelengths) to the AUC. Raman spectroscopy and light scattering were concluded to be possible detection systems for the AUC, while the development of a fast, fibre-optics-based multiwavelength detector was completed. The multiwavelength detector generated data matching literature and reference measurements, and collected data considerably faster than the commercial instrument. With the generation of data in 3-D space by the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information across the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified by the evaluation of data for a mixture of three colloids; these data conformed to results from conventional radial experiments, without significant diffusion broadening. Thus, with the designed multiwavelength detector, meaningful data in 3-D space can be collected at a much higher rate.
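How a user might slice such a 3-D data set at a chosen wavelength can be sketched as follows; the array layout, grids and function are hypothetical illustrations, not the detector's actual data format.

```python
import numpy as np

# Hypothetical multiwavelength AUC scan: absorbance sampled on a grid of
# radial positions (cm) and detection wavelengths (nm). This layout is an
# assumption for illustration, not the instrument's real output format.
radii = np.linspace(5.8, 7.2, 200)          # radial positions in the cell
wavelengths = np.arange(200, 701, 2)        # UV/Vis range, 2 nm steps
rng = np.random.default_rng(42)
absorbance = rng.random((radii.size, wavelengths.size))  # one scan

def profile_at(wavelength_nm, data, wl_axis):
    """Extract the radial absorbance profile at the nearest stored wavelength."""
    idx = int(np.argmin(np.abs(wl_axis - wavelength_nm)))
    return wl_axis[idx], data[:, idx]

wl, profile = profile_at(280, absorbance, wavelengths)
print(wl, profile.shape)   # 280 (200,)
```

Because the whole wavelength axis is stored, the evaluation wavelength can be chosen after the run, which is the advantage over a fixed-wavelength commercial detector.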
Earthquakes form by sudden brittle failure of rock, mostly as shear ruptures along a rupture plane. Besides this, mechanisms other than pure shearing have been observed for some earthquakes, mainly in volcanic areas. Possible explanations include complex rupture geometries and tensile earthquakes. Tensile earthquakes occur by opening or closure of cracks during rupturing. They are likely to be often connected with fluids that cause pressure changes in the pore space of rocks, leading to earthquake triggering. Tensile components have been reported for swarm earthquakes in West Bohemia in 2000. The aim and subject of this work is an assessment and the accurate determination of such tensile components for earthquakes in anisotropic media. Currently used standard techniques for the retrieval of earthquake source mechanisms assume isotropic rock properties. By means of moment tensors, equivalent forces acting at the source are used to explain the radiated wavefield. However, seismic anisotropy, i.e. directional dependence of elastic properties, has been observed in the earth's crust and mantle, for instance in West Bohemia. In comparison to isotropy, anisotropy causes modifications in wave amplitudes and shear-wave splitting. In this work, effects of seismic anisotropy on true or apparent tensile source components of earthquakes are investigated. In addition, earthquake source parameters are determined considering anisotropy. It is shown that moment tensors and radiation patterns due to shear sources in anisotropic media may be similar to those of tensile sources in isotropic media. Likewise, similarities between tensile earthquakes in anisotropic rocks and shear sources in isotropic media may exist. As a consequence, the interpretation of tensile source components is ambiguous. The effects that are due to anisotropy depend on the orientation of the earthquake source and the degree of anisotropy. The moment of an earthquake is also influenced by anisotropy.
The orientation of fault planes can be reliably determined even if isotropy instead of anisotropy is assumed and if the spectra of the compressional waves are used. Greater difficulties may arise when the spectra of split shear waves are additionally included. Retrieved moment tensors show systematic artefacts. Observed tensile source components determined for events in West Bohemia in 1997 can only partly be attributed to the effects of moderate anisotropy. Furthermore, moment tensors determined earlier for earthquakes induced at the German Continental Deep Drilling Program (KTB), Bavaria, were reinterpreted under assumptions of anisotropic rock properties near the borehole. The events can be consistently identified as shear sources, although their moment tensors comprise tensile components that are considered to be apparent. These results emphasise the necessity to consider anisotropy to uniquely determine tensile source parameters. Therefore, a new inversion algorithm has been developed, tested, and successfully applied to 112 earthquakes that occurred during the most recent intense swarm episode in West Bohemia in 2000 at the German-Czech border. Their source mechanisms have been retrieved using isotropic and anisotropic velocity models. Determined local magnitudes are in the range between 1.6 and 3.2. Fault-plane solutions are similar to each other and characterised by left-lateral faulting on steeply dipping, roughly North-South oriented rupture planes. Their dip angles decrease above a depth of about 8.4km. Tensile source components indicating positive volume changes are found for more than 60% of the considered earthquakes. Their size depends on source time and location. They are significant at the beginning of the swarm and at depths below 8.4km but they decrease in importance later in the course of the swarm. Determined principal stress axes include P axes striking Northeast and T axes striking Southeast.
They resemble those found earlier in Central Europe. However, depth-dependence in plunge is observed. Plunge angles of the P axes decrease gradually from 50° towards shallow angles with increasing depth. In contrast, the plunge angles of the T axes change rapidly from about 8° above a depth of 8.4km to 21° below this depth. By this thesis, spatial and temporal variations in tensile source components and stress conditions have been reported for the first time for swarm earthquakes in West Bohemia in 2000. They also persist, when anisotropy is assumed and can be explained by intrusion of fluids into the opened cracks during tensile faulting.
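A standard way to quantify the volumetric (tensile) part of a source is to split the moment tensor into isotropic and deviatoric parts; the minimal numpy sketch below uses a synthetic double-couple tensor and is not the inversion algorithm developed in the thesis.

```python
import numpy as np

def iso_deviatoric(M):
    """Split a symmetric moment tensor into isotropic and deviatoric parts.

    The isotropic part carries the volume change associated with tensile
    faulting; a pure shear (double-couple) source has zero trace.
    """
    M = np.asarray(M, dtype=float)
    iso = np.trace(M) / 3.0 * np.eye(3)
    dev = M - iso
    return iso, dev

# Synthetic pure double-couple source: strike-slip on a vertical plane
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
iso, dev = iso_deviatoric(M_dc)
print(np.trace(iso))   # 0.0 -> no volumetric (tensile) component
```

Under anisotropy, however, a shear dislocation no longer maps onto a trace-free tensor, which is exactly why apparent tensile components arise when an isotropic model is assumed.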
The intracontinental endorheic Aral Sea, remote from oceanic influences, represents an excellent sedimentary archive in Central Asia that can be used for high-resolution palaeoclimate studies. We performed palynological, microfacies and geochemical analyses on sediment cores retrieved from Chernyshov Bay, in the NW part of the modern Large Aral Sea. The most complete sedimentary sequence, whose total length is 11 m, covers approximately the past 2000 years of the late Holocene. High-resolution palynological analyses, conducted on both dinoflagellate cyst assemblages and pollen grains, evidenced prominent environmental change in the Aral Sea and in its catchment area. The diversity and distribution of dinoflagellate cysts within the assemblages characterized the sequence of salinity and lake-level changes during the past 2000 years. Because the hydrology of the Aral Sea depends strongly on inputs from its tributaries, lake levels are ultimately linked to fluctuations in meltwater discharge during spring. As the amplitude of glacial meltwater inputs is largely controlled by temperature variations in the Tien Shan and Pamir Mountains during the melting season, salinity and lake-level changes of the Aral Sea reflect temperature fluctuations in the high catchment area during the past 2000 years. Dinoflagellate cyst assemblages document lake lowstands and hypersaline conditions during ca. 0–425 AD, 920–1230 AD, 1500 AD, 1600–1650 AD, 1800 AD and since the 1960s, whereas oligosaline conditions and higher lake levels prevailed during the intervening periods. In addition, reworked dinoflagellate cysts from Palaeogene and Neogene deposits proved to be a valuable proxy for extreme sheet-wash events, when precipitation was enhanced over the Aral Sea Basin, as during 1230–1450 AD.
We propose that the recorded environmental changes are related primarily to climate, but may have been amplified during extreme conditions by human-controlled irrigation activities or military conflicts. Additionally, salinity levels and variations in solar activity show striking similarities over the past millennium, as during 1000–1300 AD, 1450–1550 AD and 1600–1700 AD, when low lake levels match well with an increase in solar activity, suggesting that an increase in net radiative forcing reinforced past regressions of the Aral Sea. On the other hand, we used pollen analyses to quantify changes in moisture conditions in the Aral Sea Basin. High-resolution reconstructions of precipitation (mean annual) and temperature (mean annual, coldest versus warmest month) parameters were performed using the “probability mutual climatic spheres” method, providing the sequence of climate change for the past 2000 years in western Central Asia. Cold and arid conditions prevailed during ca. 0–400 AD, 900–1150 AD and 1500–1650 AD, with the extension of xeric vegetation dominated by steppe elements. Conversely, warmer and less arid conditions occurred during ca. 400–900 AD and 1150–1450 AD, when steppe vegetation was enriched in plants requiring moister conditions. The precipitation pattern over the Aral Sea Basin is shown to be predominantly controlled by the Eastern Mediterranean (EM) cyclonic system, which provides humidity to the Middle East and western Central Asia during winter and early spring. As the EM is significantly regulated by pressure modulations of the North Atlantic Oscillation (NAO) when that system is in a negative phase, a relationship between humidity over western Central Asia and the NAO is proposed. Finally, laminated sediments record shifts in sedimentary processes during the late Holocene that reflect pronounced changes in taphonomic dynamics.
In Central Asia, the frequency of dust storms, which occur during spring when the continent is heating up, is mostly controlled by the intensity and position of the Siberian High (SH) pressure system. Using titanium (Ti) content in laminated sediments as a proxy for aeolian detrital inputs, changes in wind dynamics over Central Asia are documented for the past 1500 years, offering the longest reconstruction of SH variability to date. Based on high Ti content, stronger wind dynamics are reported for 450–700 AD, 1210–1265 AD, 1350–1750 AD and 1800–1975 AD, indicating a stronger SH during spring. In contrast, lower Ti content during 1750–1800 AD and 1980–1985 AD reflects a diminished influence of the SH and reduced atmospheric circulation. During 1180–1210 AD and 1265–1310 AD, considerably weakened atmospheric circulation is evidenced. As a whole, although climate dynamics controlled environmental changes and ultimately modulated the climate system of western Central Asia, it is likely that changes in solar activity also had an impact, influencing to some extent the hydrological balance of the Aral Sea as well as regional temperature patterns in the past. The appendix of the thesis is provided as a ZIP download via the HTML document.
Advances in biotechnologies rapidly increase the number of molecules of a cell that can be observed simultaneously. This includes expression levels of thousands or tens of thousands of genes as well as concentration levels of metabolites or proteins. Such profile data, observed at different times or under different experimental conditions (e.g., heat or drought stress), show how the biological experiment is reflected on the molecular level. This information helps to understand molecular behaviour and to identify molecules or combinations of molecules that characterise a specific biological condition (e.g., a disease). This work shows the potential of component extraction algorithms to identify the major factors that influenced the observed data. These can be expected experimental factors such as time or temperature, as well as unexpected factors such as technical artefacts or even unknown biological behaviour. Extracting components means reducing the very high-dimensional data to a small set of new variables termed components, each of which is a combination of all original variables. The classical approach for this purpose is principal component analysis (PCA). It is shown that, in contrast to PCA, which maximises variance only, modern approaches such as independent component analysis (ICA) are more suitable for analysing molecular data. The condition of independence between ICA components fits more naturally our assumption of individual (independent) factors influencing the data. This higher potential of ICA is demonstrated by a crossing experiment with the model plant Arabidopsis thaliana (thale cress): the experimental factors were well identified and, in addition, ICA even detected a technical artefact. However, for continuous observations, such as time-course experiments, the data generally show a nonlinear distribution. To analyse such nonlinear data, a nonlinear extension of PCA is used.
This nonlinear PCA (NLPCA) is based on a neural network algorithm. The algorithm is adapted to be applicable to incomplete molecular data sets, and thus also provides the ability to estimate the missing data. The potential of nonlinear PCA to identify nonlinear factors is demonstrated by a cold-stress experiment with Arabidopsis thaliana. The results of component analysis can be used to build a molecular network model; since it includes functional dependencies, it is termed a functional network. Applied to the cold-stress data, functional networks are shown to be appropriate for visualising biological processes and thereby revealing molecular dynamics.
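The contrast between PCA's variance criterion and ICA's independence criterion can be illustrated on synthetic data. The sketch below implements a generic FastICA iteration (whitening followed by a tanh-nonlinearity update with symmetric decorrelation) on invented two-source mixtures; it is not the analysis pipeline used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Two independent, non-Gaussian "hidden factors" (invented for illustration)
s1 = np.sign(rng.standard_normal(n)) * rng.exponential(1.0, n)  # super-Gaussian
s2 = rng.uniform(-1.0, 1.0, n)                                  # sub-Gaussian
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])      # mixing matrix ("observed variables")
X = A @ S

# Centre and whiten the observations (this step is essentially PCA)
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# Symmetric FastICA with tanh nonlinearity
W = rng.standard_normal((2, 2))
for _ in range(200):
    g = np.tanh(W @ Z)
    W_new = g @ Z.T / n - np.diag((1 - g ** 2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W_new)   # symmetric decorrelation
    W = u @ vt
Y = W @ Z    # estimated independent components (up to order and sign)
```

PCA alone would only decorrelate the mixtures; the extra rotation found by the independence criterion is what separates the underlying factors.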
Uncertainty about the sensitivity of the climate system to changes in the Earth’s radiative balance constitutes a primary source of uncertainty for climate projections. Given the continuous increase in atmospheric greenhouse gas concentrations, constraining the uncertainty range of this sensitivity is of vital importance. A common measure for expressing this key characteristic of climate models is the climate sensitivity, defined as the simulated change in global-mean equilibrium temperature resulting from a doubling of the atmospheric CO2 concentration. The broad range of climate sensitivity estimates (1.5-4.5°C, as given in the last Assessment Report of the Intergovernmental Panel on Climate Change, 2001), inferred from comprehensive climate models, illustrates that the strength of simulated feedback mechanisms varies strongly among different models. The central goal of this thesis is to constrain uncertainty in climate sensitivity. For this objective we first generate a large ensemble of model simulations covering different feedback strengths, and then test their consistency with present-day observational data and proxy data from the Last Glacial Maximum (LGM). Our analyses are based on an ensemble of fully coupled simulations realized with a climate model of intermediate complexity (CLIMBER-2). These model versions cover a broad range of climate sensitivities, from 1.3 to 5.5°C, and were generated by simultaneously perturbing a set of 11 model parameters. The analysis of the simulated model feedbacks reveals that the spread in climate sensitivity results from different realizations of the feedback strengths in water vapour, clouds, lapse rate and albedo. The calculated spread in the sum of all feedbacks spans almost the entire plausible range inferred from a sampling of more complex models.
We show that the requirement for consistency between simulated pre-industrial climate and a set of seven global-mean data constraints represents a comparatively weak test for model sensitivity (the data constrain climate sensitivity to 1.3-4.9°C). Analyses of the simulated latitudinal profile and of the seasonal cycle suggest that additional present-day data constraints, based on these characteristics, do not further constrain uncertainty in climate sensitivity. The novel approach presented in this thesis consists in systematically combining a large set of LGM simulations with data information from reconstructed regional glacial cooling. Irrespective of uncertainties in model parameters and feedback strengths, the set of our model versions reveals a close link between the simulated warming due to a doubling of CO2, and the cooling obtained for the LGM. Based on this close relationship between past and future temperature evolution, we define a method (based on linear regression) that allows us to estimate robust 5-95% quantiles for climate sensitivity. We thus constrain the range of climate sensitivity to 1.3-3.5°C using proxy-data from the LGM at low and high latitudes. Uncertainties in glacial radiative forcing enlarge this estimate to 1.2-4.3°C, whereas the assumption of large structural uncertainties may increase the upper limit by an additional degree. Using proxy-based data constraints for tropical and Antarctic cooling we show that very different absolute temperature changes in high and low latitudes all yield very similar estimates of climate sensitivity. On the whole, this thesis highlights that LGM proxy-data information can offer an effective means of constraining the uncertainty range in climate sensitivity and thus underlines the potential of paleo-climatic data to reduce uncertainty in future climate projections.
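The regression idea, fitting the ensemble-wide relation between simulated LGM cooling and 2xCO2 warming and then reading off a sensitivity range from proxy bounds, can be sketched as follows; all numbers are synthetic stand-ins, not CLIMBER-2 output or actual proxy values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "ensemble": each member has a climate sensitivity (warming for
# 2xCO2, in °C) and a simulated LGM cooling that scales with it. Both the
# scaling and the noise level are invented for illustration.
sensitivity = rng.uniform(1.3, 5.5, 50)
lgm_cooling = -1.2 * sensitivity + rng.normal(0.0, 0.3, 50)   # °C

# Fit the linear relation across the ensemble.
slope, intercept = np.polyfit(lgm_cooling, sensitivity, 1)

# A proxy reconstruction of LGM cooling then maps onto a sensitivity range.
proxy_cooling = np.array([-4.5, -2.5])   # hypothetical 5-95% proxy bounds
bounds = slope * proxy_cooling + intercept
print(np.round(bounds, 1))
```

The strength of the approach is that the regression is shared by all ensemble members regardless of their individual feedback settings, so the proxy constraint transfers directly to the future warming.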
To investigate eye-movement control in reading, the present thesis examined three phenomena related to the eyes’ landing position within words, (1) the optimal viewing position (OVP), (2) the preferred viewing location (PVL), and (3) the Fixation-Duration Inverted-Optimal Viewing Position (IOVP) Effect. Based on a corpus-analytical approach (Exp. 1), the influence of the variables word length, launch site distance, and word frequency was systematically explored. In addition, five experimental manipulations were conducted. First, word center was identified as the OVP, that is, the position within a word where refixation probability is minimal. With increasing launch site distance, however, the OVP was found to move towards the word beginning. Several possible causes of refixations were discussed. The issue of refixation saccade programming was extensively investigated, suggesting that pre-planned and directly controlled refixation saccades coexist. Second, PVL curves, that is, landing position distributions, show that the eyes are systematically deviated from the OVP, due to visuomotor constraints. By far the largest influence on mean and standard deviation of the Gaussian PVL curve was exhibited by launch site distance. Third, it was investigated how fixation durations vary as a function of landing position. The IOVP effect was replicated: Fixations located at word center are longer than those falling near the edges of a word. The effect of word frequency and/or launch site distance on the IOVP function mainly consisted in a vertical displacement of the curve. The Fixation-Duration IOVP effect is intriguing because word center (the OVP) would appear to be the best place to fixate and process a word. A critical part of the current work was devoted to investigating the origin of the effect. It was suggested that the IOVP effect arises as a consequence of mislocated fixations, i.e. fixations on unintended words, which are caused by saccadic errors.
An algorithm for estimating the proportion of mislocated fixations from empirical data was developed, based on extrapolations of landing position distributions beyond word boundaries. As a new central theoretical claim it was suggested that a new saccade program is started immediately if the intended target word is missed. On average, this will lead to decreased durations for mislocated fixations. Because mislocated fixations were shown to be most prevalent at the beginning and end of words, the proposed mechanism generated the inverted U-shape for fixation durations when computed as a function of landing position. The proposed mechanism for generating the effect is generally compatible with both oculomotor and cognitive models of eye-movement control in reading.
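The core of the estimation idea, extrapolating a Gaussian landing-position distribution beyond the word boundaries, can be sketched with a few lines of standard-library Python; the parameter values are hypothetical, not the empirical estimates of the thesis.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def mislocated_proportion(word_len, mu, sigma):
    """Estimate the share of saccades aimed at a word that land outside it.

    The landing-position distribution is modelled as a Gaussian (mean mu,
    s.d. sigma, in letter positions); the probability mass falling beyond
    [0, word_len] is the estimated proportion of mislocated fixations.
    """
    inside = normal_cdf(word_len, mu, sigma) - normal_cdf(0.0, mu, sigma)
    return 1.0 - inside

# Hypothetical case: aiming near the centre of a 5-letter word with the
# large saccadic error typical of a distant launch site.
p = mislocated_proportion(5, mu=2.0, sigma=2.0)
print(round(p, 3))
```

Because the landing-position mean shifts toward the word edges for short words and distant launch sites, this model naturally yields more mislocated fixations at word beginnings and ends, which is what generates the inverted U-shape.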
This thesis discusses challenges in IT security education, points out a gap between e-learning and practical education, and presents a work to fill the gap. E-learning is a flexible and personalized alternative to traditional education. Nonetheless, existing e-learning systems for IT security education have difficulties in delivering hands-on experience because of the lack of proximity. Laboratory environments and practical exercises are indispensable instruction tools for IT security education, but security education in conventional computer laboratories poses particular problems such as immobility as well as high creation and maintenance costs. Hence, there is a need to effectively transform security laboratories and practical exercises into e-learning forms. In this thesis, we introduce the Tele-Lab IT-Security architecture that allows students not only to learn IT security principles, but also to gain hands-on security experience through exercises in an online laboratory environment. In this architecture, virtual machines are used to provide safe user work environments instead of real computers. Thus, traditional laboratory environments can be cloned onto the Internet by software, which increases accessibility to laboratory resources and greatly reduces investment and maintenance costs. Under the Tele-Lab IT-Security framework, a set of technical solutions is also proposed to provide effective functionality, reliability, security, and performance. Virtual machines with appropriate resource allocation, software installation, and system configurations are used to build lightweight security laboratories on a hosting computer. Reliability and availability of the laboratory platforms are covered by a virtual machine management framework. This management framework provides the necessary monitoring and administration services to detect and recover critical failures of virtual machines at run time.
Considering the risk that virtual machines can be misused for compromising production networks, we present a security management solution to prevent the misuse of laboratory resources by security isolation at the system and network levels. This work is an attempt to bridge the gap between e-learning/tele-teaching and practical IT security education. It is not to substitute conventional teaching in laboratories but to add practical features to e-learning. This thesis demonstrates the possibility to implement hands-on security laboratories on the Internet reliably, securely, and economically.
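The monitoring-and-recovery idea of the management framework can be sketched as a simple watchdog pass; every name, command, and check below is a placeholder assumption, not Tele-Lab's actual interface.

```python
import subprocess

# Hypothetical watchdog for lab virtual machines: each VM is probed, and a
# VM that fails its health check is restarted. VM names and commands are
# placeholders, not Tele-Lab's real management interface.
LAB_VMS = ["lab-vm-01", "lab-vm-02"]

def is_healthy(vm_name):
    """Probe a VM; here a single ping stands in for a real health check."""
    result = subprocess.run(["ping", "-c", "1", vm_name],
                            capture_output=True)
    return result.returncode == 0

def watchdog(vms, check=is_healthy, restart=lambda vm: print("restart", vm)):
    """Run one monitoring pass and restart every VM that fails the check."""
    failed = [vm for vm in vms if not check(vm)]
    for vm in failed:
        restart(vm)
    return failed

# Simulated pass: mark lab-vm-02 as failed without touching the network.
failed = watchdog(LAB_VMS, check=lambda vm: vm != "lab-vm-02")
print(failed)   # ['lab-vm-02']
```

In a real deployment the check and restart hooks would call the virtual machine monitor's own administration interface rather than ping.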
The layer-by-layer assembly (LBL) of polyelectrolytes has been extensively studied for the preparation of ultrathin films due to the versatility of the build-up process. The control of the permeability of these layers is particularly important as there are potential drug delivery applications. Multilayered polyelectrolyte microcapsules are also of great interest due to their possible use as microcontainers. This work will present two methods that can be used as employable drug delivery systems, both of which can encapsulate an active molecule and tune the release properties of the active species. Poly(N-isopropylacrylamide) (PNIPAM) is known to be a thermo-sensitive polymer that has a Lower Critical Solution Temperature (LCST) around 32 °C; above this temperature PNIPAM is insoluble in water and collapses. It is also known that with the addition of salt, the LCST decreases. This work shows Differential Scanning Calorimetry (DSC) and Confocal Laser Scanning Microscopy (CLSM) evidence that the LCST of PNIPAM can be tuned with salt type and concentration. Microcapsules were used to encapsulate this thermo-sensitive polymer, resulting in a reversible and tunable stimuli-responsive system. The encapsulation of the PNIPAM inside of the capsule was proven with Raman spectroscopy, DSC (bulk LCST measurements), AFM (thickness change), SEM (morphology change) and CLSM (in situ LCST measurement inside of the capsules). The exploitation of the capsules as a microcontainer is advantageous not only because of the protection the capsules give to the active molecules, but also because it facilitates easier transport. The second system investigated demonstrates the ability to reduce the permeability of polyelectrolyte multilayer films by the addition of charged wax particles. The incorporation of this hydrophobic coating leads to a reduced water sensitivity particularly after heating, which melts the wax, forming a barrier layer.
This conclusion was supported by neutron reflectivity, which showed a decreased presence of D2O in planar polyelectrolyte films after annealing, i.e. the formation of a barrier layer. The permeability of capsules could also be decreased by the addition of a wax layer, as proven by the increase in recovery time measured by Fluorescence Recovery After Photobleaching (FRAP). In summary, two advanced methods, potentially suitable for drug delivery systems, have been proposed. In both cases, if biocompatible elements are used to fabricate the capsule wall, these systems provide a stable method of encapsulating active molecules. Stable encapsulation, coupled with the ability to tune the wall thickness, makes it possible to control the release profile of the molecule of interest.
Collisions of black holes and neutron stars, called mixed binaries in the following, are interesting for at least two reasons. First, they are expected to emit a large amount of energy as gravitational waves, which could be measured by new detectors; the form of those waves is expected to carry information about the internal structure of such systems. Second, collisions of such objects are the prime suspects for short gamma-ray bursts, whose exact energy-emission mechanism is still unknown. In the past, the Newtonian theory of gravitation and modifications to it were often used for numerical simulations of collisions of mixed binary systems. However, near such objects the gravitational forces are so strong that the use of General Relativity is necessary for accurate predictions. General relativistic simulations pose many problems; however, systems of two neutron stars and systems of two black holes have been studied extensively in the past and many of those problems have been solved. One of the remaining problems has been the treatment of hydrodynamics at excision boundaries. Inside excision regions, no evolution is carried out. Such regions are often used inside black holes to circumvent instabilities of the numerical methods near the singularity. Methods to handle hydrodynamics at such boundaries are described and tested in this work. One important test, and the first application of those methods, was the simulation of a neutron star collapsing to a black hole. The success of these simulations, and in particular the performance of the excision methods, was an important step towards simulations of mixed binaries. Initial data are necessary for every numerical simulation; however, the creation of such initial data for general relativistic situations is in general very complicated.
In this work it is shown how to obtain initial data for mixed binary systems using an existing method for initial data of two black holes. These initial data have been used for evolutions of such systems, and the problems encountered are discussed in this work. One of the problems is instabilities caused by the interplay of the different methods, which could be solved by dissipation of appropriate strength. Another problem is the expected drift of the black hole towards the neutron star. It is shown that this can be solved by using special gauge conditions, which prevent the black hole from moving on the computational grid. The methods and simulations shown in this work are only the starting point for a much more detailed study of mixed binary systems. Better methods, models, and simulations with higher resolution and even better gauge conditions will be the focus of future work. It is expected that such detailed studies can give information about the emitted gravitational waves, which is important in view of the newly built gravitational wave detectors. In addition, these simulations could give insight into the processes responsible for short gamma-ray bursts.
With the increasing number of applications in Internet and mobile environments, distributed software systems are required to be more powerful and flexible, especially in terms of dynamism and security. This dissertation describes my work concerning three aspects: dynamic reconfiguration of component software, security control on middleware applications, and dynamic composition of web services. First, I proposed a technology named Routing Based Workflow (RBW) to model the execution and management of collaborative components and to realize temporary binding of component instances. Temporary binding means that component instances are loaded into a created execution environment only to execute their functions, and are then released back to their repository after execution. Temporary binding makes it possible to create an idle execution environment for all collaborative components, on which change operations can be carried out immediately. Changes to the execution environment result in a new collaboration of all involved components, and this also greatly simplifies the classical issues arising from dynamic changes, such as consistency preservation. To demonstrate the feasibility of RBW, I created a dynamic secure middleware system, the Smart Data Server Version 3.0 (SDS3). In SDS3, an open-source implementation of CORBA is adopted and modified as the communication infrastructure, and three secure components managed by RBW are created to enhance the security of access to deployed applications. SDS3 offers multi-level security control on its applications, from strategy control to application-specific detail control. Under management by RBW, the strategy control of SDS3 applications can be changed dynamically by reorganizing the collaboration of the three secure components. In addition, I created the Dynamic Services Composer (DSC) based on the Apache open-source projects Apache Axis and WSIF.
In DSC, RBW is employed to model the interaction and collaboration of web services and to enable dynamic changes to the flow structure of web services. Finally, overall performance tests were made to evaluate the efficiency of the developed RBW and of SDS3. The results demonstrate that temporary binding of component instances has only a slight impact on the execution efficiency of components, and that the blackout time arising from dynamic changes can be greatly reduced in applications.
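The "temporary binding" idea described above can be sketched in a few lines of Python. This is a minimal illustration under assumptions of mine, not code from SDS3: all class and method names (`ComponentRepository`, `ExecutionEnvironment`, the route list) are hypothetical, and the "components" are plain callables.

```python
# Illustrative sketch of temporary binding in a routing-based workflow.
# Component instances live in a repository; an execution environment
# binds them only for the duration of one execution, then releases them.

class ComponentRepository:
    def __init__(self):
        self._pool = {}

    def register(self, name, instance):
        self._pool[name] = instance

    def acquire(self, name):
        # Temporarily remove the instance from the repository.
        return self._pool.pop(name)

    def release(self, name, instance):
        # Return the instance to the repository after execution.
        self._pool[name] = instance


class ExecutionEnvironment:
    """Components are bound only while one execution is running."""

    def __init__(self, repo, route):
        self.repo = repo
        self.route = route  # ordered component names, i.e. the routing

    def execute(self, payload):
        for name in self.route:
            comp = self.repo.acquire(name)       # temporary binding
            try:
                payload = comp(payload)
            finally:
                self.repo.release(name, comp)    # release to repository
        return payload


repo = ComponentRepository()
repo.register("authenticate", lambda msg: msg + ["authenticated"])
repo.register("authorize", lambda msg: msg + ["authorized"])
env = ExecutionEnvironment(repo, ["authenticate", "authorize"])
result = env.execute(["request"])

# While no execution is running the environment is idle, so the
# collaboration (the route) can be changed without consistency issues:
env.route = ["authorize", "authenticate"]
```

Because the components are only bound during an execution, a reconfiguration between executions never sees a partially executed collaboration, which is the property the thesis exploits to simplify consistency preservation.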
The goal of a Brain-Computer Interface (BCI) is to develop a unidirectional interface between a human and a computer that allows control of a device via brain signals alone. While the BCI systems of almost all other groups require the user to be trained over several weeks or even months, the group of Prof. Dr. Klaus-Robert Müller in Berlin and Potsdam, to which I belong, was one of the first research groups in this field to use machine learning techniques on a large scale. The adaptivity of the processing system to the individual brain patterns of the subject confers huge advantages on the user. Thus BCI research is considered a hot topic in machine learning and computer science. It requires interdisciplinary cooperation between disparate fields such as neuroscience, since only by combining machine learning and signal processing techniques based on neurophysiological knowledge will the largest progress be made. In this work I deal in particular with my part of this project, which lies mainly in the area of computer science. I have considered the following three main points: <b>Establishing a performance measure based on information theory:</b> I have critically examined the assumptions of Shannon's information transfer rate for application in a BCI context. By establishing suitable coding strategies I was able to show that this theoretical measure approximates quite well what is practically achievable. <b>Transfer and development of suitable signal processing and machine learning techniques:</b> One substantial component of my work was to develop several machine learning and signal processing algorithms to improve the efficiency of a BCI. Based on the neurophysiological knowledge that several independent EEG features can be observed for some mental states, I have developed a method for combining different, and possibly independent, features, which improved performance.
In some cases the combination algorithm outperforms the best single feature by more than 50%. Furthermore, through the development of suitable algorithms, I have addressed both theoretically and practically the question of the optimal number of classes which should be used for a BCI. It transpired that with BCI performances reported so far, three or four different mental states are optimal. For another extension I have combined ideas from signal processing with those of machine learning, since a high gain can be achieved if the temporal filtering, i.e., the choice of frequency bands, is automatically adapted to each subject individually. <b>Implementation of the Berlin Brain-Computer Interface and realization of suitable experiments:</b> Finally, a further substantial component of my work was to realize an online BCI system which includes the developed methods, but is also flexible enough to allow the simple realization of new algorithms and ideas. So far, bitrates of up to 40 bits per minute have been achieved with this system by completely untrained users, which, compared to the results of other groups, is highly successful.
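The information transfer rate mentioned above is conventionally computed with the Wolpaw formulation of Shannon's channel capacity, which assumes uniform class priors and symmetric errors. The sketch below shows that standard formula, not the refined coding strategies developed in the thesis; the 90% accuracy and 1.5 s trial length in the usage example are illustrative values.

```python
import math

def wolpaw_itr(n_classes, accuracy):
    """Bits per trial under Shannon's noisy-channel assumptions
    (uniform class priors, all errors equally likely)."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:          # at or below chance level: no information
        return 0.0
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:               # avoid log2(0) for a perfect classifier
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits

# Example: 2 classes at 90% accuracy, one decision every 1.5 seconds.
bits_per_trial = wolpaw_itr(2, 0.90)
bits_per_minute = bits_per_trial * 60 / 1.5
```

At 100% accuracy and two classes the formula reduces to 1 bit per trial, as expected; at chance level it yields zero, so the measure penalizes slow, unreliable interfaces consistently.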
We analyze the notions of monotonicity and complete monotonicity for continuous-time Markov chains taking values in a finite partially ordered set. As in discrete time, the two notions are not equivalent. However, we show that there are partially ordered sets for which monotonicity and complete monotonicity coincide in continuous time but not in discrete time.
In silico identification of genes regulated by abscisic acid in Arabidopsis thaliana (L.) Heynh.
(2005)
Abscisic acid (ABA) is a major plant hormone that plays an important role during plant growth and development. During vegetative growth ABA mediates (in part) responses to various environmental stresses such as cold, drought and high salinity. The response triggered by ABA includes changes in the transcript levels of genes involved in stress tolerance. The aim of this project was the in silico identification of genes putatively regulated by ABA in A. thaliana. In silico predictions were combined with experimental data in order to evaluate the reliability of the computational predictions. Taking advantage of the genome sequence of A. thaliana, publicly available since 2000, 1 kb upstream sequences were screened for combinations of cis-elements known to be involved in the regulation of ABA-responsive genes. It was found that around 10 to 20 percent of the genes of A. thaliana might be regulated by ABA. Further analyses of the predictions revealed that certain combinations of cis-elements that confer ABA-responsiveness were significantly over-represented compared with results in random sequences and with random expectations. In addition, it was observed that other combinations that confer ABA-responsiveness in monocotyledonous species might not be functional in A. thaliana. It is proposed that ABA-responsive genes in A. thaliana carry pairs of the ABRE (abscisic acid responsive element) with MYB binding sites, with the DRE (dehydration responsive element), or with a second ABRE. The analysis of the distances between pairs of cis-elements suggested that pairs of ABREs are bound by homodimers of ABRE binding proteins. In contrast, pairs of MYB binding sites with ABRE, or of DRE with ABRE, showed distances between cis-elements suggesting that the binding proteins interact through protein complexes and not directly.
The comparison of computational predictions with experimental data confirmed that the regulatory mechanisms leading to the induction or repression of genes by ABA are still only incompletely understood. It became evident that besides the cis-elements proposed in this study to be present in ABA-responsive genes, other known and unknown cis-elements might play an important role in the transcriptional regulation of ABA-responsive genes, for example auxin-related cis-elements or the cis-elements recognized by the NAM family of transcription factors (No Apical Meristem). This work documents the use of computational and experimental approaches to analyse possible interactions between cis-elements involved in the regulation of ABA-responsive genes. The computational predictions allowed the distinction of putatively relevant combinations of cis-elements from irrelevant combinations in ABA-responsive genes. The comparison with experimental data made it possible to identify certain cis-elements that had not previously been associated with ABA-mediated transcriptional regulation, but that might be present in ABA-responsive genes (e.g. auxin responsive elements). Moreover, the efforts to unravel the gene regulatory network associated with the ABA-signalling pathway revealed that NAM transcription factors and their corresponding binding sequences are important components of this network.
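The kind of promoter screen described above, i.e. scanning 1 kb upstream sequences for co-occurring cis-element pairs, can be sketched as follows. This is a toy illustration, not the pipeline of the thesis: the ACGTG ABRE core is widely cited, but the MYB consensus pattern and the 50 bp pairing window used here are assumptions for this sketch, and only the forward strand is scanned.

```python
import re

# Illustrative motif patterns (the MYB consensus and the pairing
# window are assumptions for this sketch, not values from the study).
MOTIFS = {
    "ABRE": re.compile(r"ACGTG"),
    "MYB": re.compile(r"[AC]ACC[AT]A[AC]C"),
}
MAX_PAIR_DISTANCE = 50  # bp; hypothetical window for a functional pair

def find_sites(upstream_seq):
    """All (motif name, start position) hits in an upstream sequence."""
    return [(name, m.start())
            for name, pat in MOTIFS.items()
            for m in pat.finditer(upstream_seq.upper())]

def paired(upstream_seq, a="ABRE", b="ABRE"):
    """True if motifs a and b co-occur within the pairing window."""
    sites = find_sites(upstream_seq)
    pos_a = [p for n, p in sites if n == a]
    pos_b = [p for n, p in sites if n == b]
    return any(0 < abs(pa - pb) <= MAX_PAIR_DISTANCE
               for pa in pos_a for pb in pos_b)

# A hypothetical promoter with two ABRE cores 25 bp apart:
promoter = "TTTACGTG" + "T" * 20 + "ACGTGAAA"
```

Over-representation of such pairs would then be assessed against shuffled or random sequences, as the study describes.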
The advent of large-scale and high-throughput technologies has recently caused a shift in focus in contemporary biology from decades of reductionism towards a more systemic view. Alongside the availability of genome sequences, the exploration of organisms using such approaches should give rise to a more comprehensive understanding of complex systems. Domestication and intensive breeding of crop plants have led to a parallel narrowing of their genetic basis. The potential to improve crops by conventional breeding using elite cultivars is therefore rather limited, and molecular technologies such as marker-assisted selection (MAS) are currently being exploited to re-introduce allelic variance from wild species. Molecular breeding strategies have to date mostly focused on the introduction of yield- or resistance-related traits. However, given that medical research has highlighted the importance of crop compositional quality in the human diet, this research field is rapidly becoming more important. The chemical composition of biological tissues can be efficiently assessed by metabolite profiling techniques, which allow the multivariate detection of metabolites of a given biological sample. Here, a GC/MS metabolite profiling approach has been applied to investigate natural variation of tomatoes with respect to the chemical composition of their fruits. The establishment of a mass spectral and retention index (MSRI) library was a prerequisite for this work in order to establish a framework for the identification of metabolites from a complex mixture. As mass spectral and retention index information is highly important for the metabolomics community, this library was made publicly available. Metabolite profiling of tomato wild species revealed large differences in chemical composition, especially of amino and organic acids, as well as in sugar composition and secondary metabolites. Intriguingly, the analysis of a set of S. pennellii introgression lines (IL) identified 889 quantitative trait loci of compositional quality and 326 yield-associated traits. These traits are characterized by increases/decreases not only of single metabolites but also of entire metabolic pathways, thus highlighting the potential of this approach in uncovering novel aspects of metabolic regulation. Finally, the biosynthetic pathway of the phenylalanine-derived fruit volatiles phenylethanol and phenylacetaldehyde was elucidated via a combination of metabolic profiling of natural variation, stable isotope tracer experiments and reverse genetic experimentation.
We investigate the rotational and thermal properties of star-forming molecular clouds using hydrodynamic simulations. Stars form from molecular cloud cores by gravoturbulent fragmentation. Understanding the angular momentum and the thermal evolution of cloud cores thus plays a fundamental role in completing the theoretical picture of star formation. This is true not only for current star formation as observed in regions like the Orion nebula or the ρ-Ophiuchi molecular cloud but also for the formation of stars of the first or second generation in the universe. In this thesis we show how the angular momentum of prestellar and protostellar cores evolves and compare our results with observed quantities. The specific angular momenta of prestellar cores in our models agree remarkably well with observations of cloud cores. Some prestellar cores go into collapse to build up stars and stellar systems. The resulting protostellar objects have specific angular momenta that fall into the range of observed binaries. We find that collapse induced by gravoturbulent fragmentation is accompanied by a substantial loss of specific angular momentum. This eases the "angular momentum problem" in star formation even in the absence of magnetic fields. The distribution of stellar masses at birth (the initial mass function, IMF) is another aspect that any theory of star formation must explain. We focus on the influence of the thermodynamic properties of star-forming gas and address this issue by studying the effects of a piecewise polytropic equation of state on the formation of stellar clusters. We increase the polytropic exponent γ from a value below unity to a value above unity at a certain critical density. The change of the thermodynamic state at the critical density selects a characteristic mass scale for fragmentation, which we relate to the peak of the IMF observed in the solar neighborhood.
Our investigation generally supports the idea that the distribution of stellar masses depends mainly on the thermodynamic state of the gas. A common assumption is that the chemical evolution of the star-forming gas can be decoupled from its dynamical evolution, with the former never affecting the latter. Although justified in some circumstances, this assumption is not true in every case. In particular, in low-metallicity gas the timescales for reaching chemical equilibrium are comparable to or larger than the dynamical timescales. In this thesis we take a first approach to combining a chemical network with a hydrodynamical code in order to study the influence of low levels of metal enrichment on the cooling and collapse of ionized gas in small protogalactic halos. Our initial conditions represent protogalaxies forming within a fossil HII region -- a previously ionized HII region which has not yet had time to cool and recombine. We show that in these regions, H2 is the dominant and most effective coolant, and that it is the amount of H2 formed that controls whether or not the gas can collapse and form stars. For metallicities Z <= 10<sup>-3</sup> Zsun, metal line cooling alters the density and temperature evolution of the gas by less than 1% compared to the metal-free case at densities below 1 cm<sup>-3</sup> and temperatures above 2000 K. We also find that an external ultraviolet background delays or suppresses the cooling and collapse of the gas regardless of whether it is metal-enriched or not. Finally, we study the dependence of this process on redshift and on the mass of the dark matter halo.
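The piecewise polytropic equation of state described above, P ~ rho^gamma with gamma switching from below to above unity at a critical density, implies T ~ rho^(gamma-1), and the characteristic fragmentation mass can be related to the Jeans mass at the critical density. The sketch below illustrates that relation; the gamma values, critical density and temperature are illustrative numbers, not the values used in the thesis.

```python
import math

# Physical constants in cgs units.
K_B = 1.3807e-16    # Boltzmann constant, erg/K
M_H = 1.6726e-24    # hydrogen mass, g
G = 6.674e-8        # gravitational constant
MSUN = 1.989e33     # solar mass, g
MU = 2.36           # mean molecular weight of molecular gas (assumed)

def temperature(rho, rho_crit=2.5e-16, t_crit=10.0,
                gamma_low=0.7, gamma_high=1.1):
    """T(rho) for a piecewise polytrope P ~ rho^gamma, i.e.
    T ~ rho^(gamma - 1); gamma and rho_crit are illustrative."""
    if rho < rho_crit:
        return t_crit * (rho / rho_crit) ** (gamma_low - 1.0)
    return t_crit * (rho / rho_crit) ** (gamma_high - 1.0)

def jeans_mass(rho, temp):
    """Mass contained in one Jeans length cubed, in solar masses."""
    cs2 = K_B * temp / (MU * M_H)               # isothermal sound speed^2
    lam = math.sqrt(math.pi * cs2 / (G * rho))  # Jeans length
    return rho * lam ** 3 / MSUN
```

With gamma < 1 below the critical density the gas cools as it contracts, so fragmentation is easy; with gamma > 1 above it the gas heats, fragmentation stalls, and the Jeans mass at the transition sets the characteristic mass scale.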
The occurrence of earthquakes is characterized by a high degree of spatiotemporal complexity. Although numerous patterns, e.g. fore- and aftershock sequences, are well known, the underlying mechanisms are not observable and thus not understood. Because the recurrence times of large earthquakes are usually decades or centuries, the number of such events in corresponding data sets is too small to draw conclusions with reasonable statistical significance. Therefore, the present study combines numerical modeling and the analysis of real data in order to unveil the relationships between physical mechanisms and observational quantities. The key hypothesis is the validity of the so-called "critical point concept" for earthquakes, which assumes that large earthquakes occur as phase transitions in a spatially extended many-particle system, similar to percolation models. New concepts are developed to detect critical states in simulated and in natural data sets. The results indicate that important features of seismicity like the frequency-size distribution and the temporal clustering of earthquakes depend on frictional and structural fault parameters. In particular, the degree of quenched spatial disorder (the "roughness") of a fault zone determines whether large earthquakes occur quasiperiodically or in a more clustered fashion. This illustrates the power of numerical models for identifying regions in parameter space which are relevant for natural seismicity. The critical point concept is verified for both synthetic and natural seismicity, in terms of a critical state which precedes a large earthquake: a gradual roughening of the (unobservable) stress field leads to a scale-free (observable) frequency-size distribution. Furthermore, a growth of the spatial correlation length and an acceleration of the seismic energy release prior to large events are found. The predictive power of these precursors is, however, limited.
Rather than forecasting the time, location, and magnitude of individual events, a more promising use of these precursors is as one contribution to a broad multiparameter forecasting approach.
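The scale-free frequency-size distribution discussed above is conventionally quantified by the Gutenberg-Richter b-value. A minimal sketch, using Aki's maximum-likelihood estimator on a synthetic catalogue (not on the data of the study):

```python
import math
import random

def b_value(magnitudes, m_c):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value for magnitudes at or above the completeness threshold m_c."""
    mags = [m for m in magnitudes if m >= m_c]
    mean_excess = sum(mags) / len(mags) - m_c
    return math.log10(math.e) / mean_excess

# Synthetic catalogue drawn from a G-R law with b = 1.0 above m_c = 2.0:
# magnitude excesses are exponential with rate ln(10) * b.
random.seed(0)
cat = [2.0 + random.expovariate(math.log(10) * 1.0) for _ in range(50000)]
b_hat = b_value(cat, 2.0)
```

Applied to simulated catalogues, such an estimator makes the dependence of the frequency-size distribution on fault parameters like roughness directly measurable.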
Stars are born in turbulent molecular clouds that fragment and collapse under the influence of their own gravity, forming a cluster of a hundred or more stars. The star formation process is controlled by the interplay between supersonic turbulence and gravity. In this work, the properties of stellar clusters created by numerical simulations of gravoturbulent fragmentation are compared to those from observations. This includes the analysis of properties of individual protostars as well as statistical properties of the entire cluster. It is demonstrated that protostellar mass accretion is a highly dynamical and time-variant process. The peak accretion rate is reached shortly after the formation of the protostellar core. It is about one order of magnitude higher than the constant accretion rate predicted by the collapse of a classical singular isothermal sphere, in agreement with the observations. For a more reasonable comparison, the model accretion rates are converted to the observables bolometric temperature, bolometric luminosity, and envelope mass. The accretion rates from the simulations are used as input for an evolutionary scheme. The resulting distribution in the Tbol-Lbol-Menv parameter space is then compared to observational data by means of a 3D Kolmogorov-Smirnov test. The highest probability found that the distributions of model tracks and observational data points are drawn from the same population is 70%. The ratios of objects belonging to different evolutionary classes in observed star-forming clusters are compared to the temporal evolution of the gravoturbulent models in order to estimate the evolutionary stage of a cluster. While it is difficult to estimate absolute ages, the relative numbers of young stars reveal the evolutionary status of a cluster with respect to other clusters. The sequence shows Serpens as the youngest and IC 348 as the most evolved of the investigated clusters.
Finally, the structures of young star clusters are investigated by applying different statistical methods, such as the normalised mean correlation length and the minimum spanning tree technique, and by a newly defined measure for cluster elongation. The clustering parameters of the model clusters correspond well in many cases to those of observed clusters. The temporal evolution of the clustering parameters shows that the star cluster builds up from several subclusters and evolves into a more centrally concentrated cluster, while the cluster expands more slowly than new stars are formed.
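The two structure statistics named above, the mean correlation length and the minimum spanning tree, are often combined into a single clustering measure (cf. the Q parameter of Cartwright & Whitworth). The sketch below computes a simplified, unnormalised version of such a ratio for 2D positions; it omits the cluster-radius normalisations of the published measure.

```python
import math

def mst_total_length(points):
    """Total edge length of the minimum spanning tree of 2D points,
    via Prim's algorithm on the complete Euclidean graph."""
    n = len(points)
    in_tree = {0}
    total = 0.0
    while len(in_tree) < n:
        # Cheapest edge from the tree to a point outside it.
        d, j = min((math.dist(points[i], points[k]), k)
                   for i in in_tree
                   for k in range(n) if k not in in_tree)
        total += d
        in_tree.add(j)
    return total

def q_measure(points):
    """Mean MST edge length divided by the mean pairwise separation
    (a simplified stand-in for the normalised Q parameter)."""
    n = len(points)
    mean_mst = mst_total_length(points) / (n - 1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    mean_sep = sum(math.dist(points[i], points[j])
                   for i, j in pairs) / len(pairs)
    return mean_mst / mean_sep
```

Lower values of such a ratio indicate subclustered configurations, higher values a smooth, centrally concentrated distribution, which is how the build-up from subclusters described above can be tracked over time.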
We analyze the asymptotic behavior in the limit epsilon to zero for a wide class of difference operators H_epsilon = T_epsilon + V_epsilon with an underlying multi-well potential. They act on the square-summable functions on the lattice (epsilon Z)^d. We start by showing the validity of a harmonic approximation and construct WKB solutions at the wells. Then we construct a Finslerian distance d induced by H and show that short integral curves are geodesics and that d gives the rate for the exponential decay of Dirichlet eigenfunctions. In terms of this distance, we give sharp estimates for the interaction between the wells and construct the interaction matrix.
Ultrathin, semi-permeable membranes are not only essential in natural systems (membranes of cells or organelles) but are also important for applications (separation, filtering) in miniaturized devices. Membranes integrated as diffusion barriers or filters in micron-scale devices need to fulfill requirements equivalent to those of the natural systems, in particular mechanical stability and functionality (e.g. permeability), while being only tens of nm in thickness to allow fast diffusion times. Promising candidates for such membranes are polyelectrolyte multilayers, which were found to be mechanically stable and variable in functionality. In this thesis two concepts to integrate such membranes into larger-scale structures were developed. The first is based on the directed adhesion of polyelectrolyte hollow microcapsules. As a result, arrays of capsules were created, which can be useful for combinatorial chemistry or sensing. This concept was expanded to couple encapsulated living cells to the surface. The second concept is the transfer of flat freestanding multilayer membranes to structured surfaces. We have developed a method that allows us to couple mm2 areas of defect-free film with thicknesses down to 50 nm to structured surfaces and to avoid crumpling of the membrane. We could again use this technique to produce arrays of micron size. The freestanding membrane is a diffusion barrier for high-molecular-weight molecules, while small molecules can pass through the membrane, which allows us to sense solution properties. We have also shown that osmotic pressures lead to membrane deflection, which could be described quantitatively.
In semi-arid savannas, unsustainable land use can lead to degradation of entire landscapes, e.g. in the form of shrub encroachment. This leads to habitat loss and is assumed to reduce species diversity. In BIOTA phase 1, we investigated the effects of land use on population dynamics at the farm scale. In phase 2 we scale up to consider the whole regional landscape, consisting of a diverse mosaic of farms with different historic and present land use intensities. This mosaic creates a heterogeneous, dynamic pattern of structural diversity at a large spatial scale. Understanding how the region-wide dynamic land use pattern affects the abundance of animal and plant species requires the integration of processes on large as well as on small spatial scales. In our multidisciplinary approach, we integrate information from remote sensing, genetic and ecological field studies as well as small-scale process models into a dynamic region-wide simulation tool. <hr> Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung, workshop of 9-10 February 2006.
Decisions for the conservation of biodiversity and sustainable management of natural resources are typically related to large scales, i.e. the landscape level. However, understanding and predicting the effects of land use and climate change on scales relevant for decision-making requires including both large-scale vegetation dynamics and small-scale processes, such as soil-plant interactions. Integrating the results of multiple BIOTA subprojects enabled us to include the necessary data from soil science, botany, socio-economics and remote sensing in a high-resolution, process-based and spatially explicit model. Using an example of a sustainably used research farm and a communally used and degraded farming area in semiarid southern Namibia, we show the power of simulation models as a tool to integrate processes across disciplines and scales.
Fluvial systems are among the major features shaping a landscape. They adjust to the prevailing tectonic and climatic setting and are therefore very sensitive markers of changes in these systems. If their response to tectonic and climatic forcing is quantified, and if the climatic signal is excluded, it is possible to derive a local deformation history. Here, we investigate fluvial terraces and erosional surfaces in the southern Chilean forearc to assess the long-term geomorphic, and hence tectonic, evolution. Remote sensing and field studies of the Nahuelbuta Range show that the long-term deformation of the Chilean forearc is manifested by breaks in topography, sequences of differentially uplifted marine, alluvial and strath terraces, as well as tectonically modified river courses and drainage basins. We used SRTM-90 data as the basic elevation information for extracting and delineating drainage networks. We calculated hypsometric curves as an indicator of basin uplift, stream-length gradient indices to identify stream segments with anomalous slopes, and longitudinal river profiles as well as DS-plots to identify knickpoints and other anomalies. In addition, we investigated the topography with elevation-slope graphs, profiles, and DEMs to reveal erosional surfaces. During the first field campaign we measured palaeoflow directions, performed pebble counting and sampled the fluvial terraces in order to apply cosmogenic nuclide dating (<sup>10</sup>Be, <sup>26</sup>Al) as well as provenance analyses. Our preliminary analysis of the Coastal Cordillera indicates a clear segmentation between the northern and southern parts of the Nahuelbuta Range. The Lanalhue Fault, a NW-SE striking fault zone oblique to the plate boundary, defines the segment boundary. Furthermore, we find a complex drainage re-organisation, including a drainage reversal and a wind gap on the divide between the Tirúa and Pellahuén basins east of the town of Tirúa.
The coastal basins lost most of their Andean sediment supply areas that existed in Tertiary and in part during early Pleistocene time. Between the Bío-Bío and Imperial rivers, no Andean river is currently able to traverse the Coastal Cordillera, suggesting ongoing Quaternary uplift of the entire range. From the spatial distribution of geomorphic surfaces in this region, two uplift signals may be derived: (1) a long-term differential uplift process, active since the Miocene and possibly caused by underplating of subducted trench sediments; (2) a younger, local uplift affecting only the northern part of the Nahuelbuta Range that may be caused by the interaction of the forearc with the subduction of the Mocha Fracture Zone at the latitude of the Arauco peninsula. Our approach thus contributes to deciphering the characteristics of forearc development at active convergent margins using long-term geomorphic indicators. Furthermore, it is expected that our ongoing assessment will constrain repeatedly active zones of deformation.
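The stream-length gradient index used above to flag anomalous stream segments follows Hack's definition SL = (dH/dL) * L, where dH/dL is the channel gradient over a reach and L is the channel distance from the drainage divide to the reach midpoint. A minimal sketch (the example elevations and distances are invented, not data from the study area):

```python
def sl_index(h_up, h_down, l_up, l_down):
    """Hack's stream-length gradient index SL = (dH/dL) * L.

    h_up, h_down: channel elevations (m) at the upstream and
                  downstream ends of the reach.
    l_up, l_down: channel distances (m) of those points from the
                  drainage divide.
    """
    gradient = (h_up - h_down) / (l_down - l_up)  # local slope dH/dL
    midpoint = 0.5 * (l_up + l_down)              # distance L to midpoint
    return gradient * midpoint

# A hypothetical reach dropping 50 m over 1 km, centred 5 km downstream:
sl = sl_index(h_up=450.0, h_down=400.0, l_up=4500.0, l_down=5500.0)
```

Because SL scales the local gradient by downstream distance, segments crossing active structures or resistant lithologies stand out as anomalously high values along an otherwise smooth profile.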
The rigorous development, application and validation of distributed hydrological models requires evaluating data in a spatially distributed way. In particular, spatial model predictions, such as the distribution of soil moisture, runoff-generating areas, nutrient-contributing areas or erosion rates, are to be assessed against spatially distributed observations. Model inputs, such as the distribution of modelling units derived by GIS and remote sensing analyses, should also be evaluated against ground-based observations of landscape characteristics. So far, however, quantitative methods of spatial field comparison have rarely been used in hydrology. In this paper, we present algorithms that allow the comparison of observed and simulated spatial hydrological data. The methods can be applied to binary and categorical data on regular grids. They comprise cell-by-cell algorithms, cell-neighbourhood approaches that account for fuzziness of location, and multi-scale algorithms that evaluate the similarity of spatial fields with changing resolution. All methods provide a quantitative measure of the similarity of two maps. The comparison methods are applied in two mountainous catchments in southern Germany (Brugga, 40 km<sup>2</sup>) and Austria (Löhnersbach, 16 km<sup>2</sup>). As an example of binary hydrological data, the distribution of saturated areas is analyzed in both catchments. For categorical data, vegetation zones that are associated with different runoff generation mechanisms are analyzed in the Löhnersbach. Mapped spatial patterns are compared to simulated patterns from terrain index calculations and from satellite image analysis. We discuss how particular features of visual similarity between the spatial fields are captured by the quantitative measures, leading to recommendations on suitable algorithms in the context of evaluating distributed hydrological models.
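Two of the comparison measures described, strict cell-by-cell agreement and a neighbourhood version that accepts a match anywhere within a (2r+1)x(2r+1) window, can be sketched for binary maps as follows. This is a simplified illustration under assumptions of mine (a hard window rather than a graded fuzzy-membership function), not the exact algorithms of the paper.

```python
def cell_by_cell(obs, sim):
    """Fraction of cells where observed and simulated maps agree."""
    n = sum(len(row) for row in obs)
    hits = sum(o == s
               for row_o, row_s in zip(obs, sim)
               for o, s in zip(row_o, row_s))
    return hits / n

def neighbourhood_agreement(obs, sim, r=1):
    """Fraction of observed cells matched anywhere within a
    (2r+1)x(2r+1) window in the simulation (fuzziness of location)."""
    rows, cols = len(obs), len(obs[0])

    def match(grid, i, j, val):
        return any(grid[a][b] == val
                   for a in range(max(0, i - r), min(rows, i + r + 1))
                   for b in range(max(0, j - r), min(cols, j + r + 1)))

    hits = sum(match(sim, i, j, obs[i][j])
               for i in range(rows) for j in range(cols))
    return hits / (rows * cols)

# Toy 3x3 binary maps of, e.g., saturated (1) vs. unsaturated (0) areas,
# with the simulated pattern displaced by one cell:
observed = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
simulated = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```

On this example the strict measure penalises the one-cell displacement heavily, while the neighbourhood measure recognises the patterns as identical up to location, which is exactly the distinction the multi-scale evaluation of the paper is built around.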
One of the most difficult issues when dealing with optical water remote sensing is its acceptance as a useful tool for environmental research. This problem is, on the one hand, related to the optical complexity and variability of the investigated natural media, which raises the question of the plausibility of the parameters derived by remote-sensing techniques. Such studies require detailed knowledge of the regional bio- and chemico-optical properties, yet such information is seldom available for the sites of interest. On the other hand, the primary advantage of remote-sensing information, the provision of a spatial overview, may not be fully exploited by the disciplines that would benefit most from it. In a variety of disciplines, scientists have been trained primarily to work with discrete data sets and therefore have little experience of incorporating information on spatial heterogeneity. In this thesis, the opportunity arose to assess the potential of Ocean Colour data to provide spatial and seasonal information about the surface waters of Lake Baikal (Siberia). Discrete limnological field data are available, but the spatial extent of Lake Baikal is enormous (ca. 600 km) and the field data are limited to selected sites and expedition time windows. This remote-sensing investigation therefore aimed to support a multi-disciplinary limnological investigation within the framework of the palaeoclimate EU project ‘High Resolution CONTINENTal Paleoclimate Record in Lake Baikal, Siberia (CONTINENT)’ using spatial and seasonal information from the SeaWiFS satellite (NASA). From this, the SeaWiFS study evolved into the first efficient bio-optical satellite study of Lake Baikal.
During the course of three years, field work including spectral field measurements and water sampling was carried out at Lake Baikal in southern Siberia and at the Mecklenburg and Brandenburg lake districts in Germany. The first step in processing the SeaWiFS satellite data involved adapting the SeaDAS (NASA) atmospheric-correction processing to match the specific conditions of Lake Baikal as closely as possible. Next, various Chl-a algorithms were tested on the atmospherically corrected, optimized SeaWiFS data set (years 2001 to 2002) by comparing the CONTINENT pigment ground-truth data with the Chl-a concentrations derived from the satellite data. This demonstrated the high performance of the global Chl-a products OC2 and OC4 for the oligotrophic, transparent waters (bio-optical Case 1) of Lake Baikal. However, considerable Chl-a overestimation prevailed in bio-optical Case 2 areas during discharge events. High-organic terrigenous input into Lake Baikal could be traced, and information extracted, using the SeaWiFS spectral data. Suspended Particulate Matter (SPM) was quantified by regressing the SeaDAS attenuation coefficient, as the optical parameter, against SPM field data. Finally, the Chl-a and terrigenous input maps derived from the remote-sensing data were used to assist in analysing the relationships between the various discrete data obtained during the CONTINENT field work. Hence, plausible spatial and seasonal information describing autochthonous and allochthonous material in Lake Baikal could be provided by satellite data. Lake Baikal, with its bio-optical complexity and its different areas of Case 1 and Case 2 waters, is a very interesting case study for Ocean Colour analyses. Proposals for future Ocean Colour studies of Lake Baikal are discussed, including which bio-optical parameters for analytical models still need to be clarified by field investigations.
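The OC4 product mentioned above is a "maximum band ratio" polynomial; a minimal sketch follows, using the published OC4v4 coefficients (treat them as illustrative and verify against current NASA/SeaDAS documentation before any quantitative use):

```python
import math

# OC4v4 polynomial coefficients (a0..a4) as published by NASA for SeaWiFS.
OC4V4 = (0.366, -3.067, 1.930, 0.649, -1.532)

def chl_oc4(rrs443, rrs490, rrs510, rrs555, coeffs=OC4V4):
    """Chl-a (mg m^-3) from remote-sensing reflectances via the maximum
    band ratio R = log10(max(Rrs443, Rrs490, Rrs510) / Rrs555)."""
    r = math.log10(max(rrs443, rrs490, rrs510) / rrs555)
    a0, a1, a2, a3, a4 = coeffs
    return 10.0 ** (a0 + a1 * r + a2 * r ** 2 + a3 * r ** 3 + a4 * r ** 4)
```

Band-ratio products of this kind work well in Case 1 waters but, as found for Lake Baikal, overestimate Chl-a where terrigenous material dominates the optical signal (Case 2).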
From its first use in the field of biochemistry, instrumental analysis has offered a variety of invaluable tools for the comprehensive description of biological systems. Multi-selective methods that aim to cover as many endogenous compounds as possible in biological samples use different analytical platforms and include approaches such as gene expression profiling and metabolite profiling. The enormous amount of data generated by profiling methods needs to be evaluated in a manner appropriate to the question under investigation. The new field of systems biology rises to the challenge of developing strategies for collecting, processing, interpreting and archiving this vast amount of data, and of making those data available in the form of databases, tools, models and networks to the scientific community. Against the background of this development, a multi-selective method for the determination of phytohormones was developed and optimised, complementing the profile analyses already in use (Chapter I). The general feasibility of a simultaneous analysis of plant metabolites and phytohormones in one sample set-up was tested in studies on the analytical robustness of the metabolite profiling protocol. The recovery of plant metabolites proved to be satisfactorily robust against variations in the extraction protocol introduced by common phytohormone extraction procedures; a joint extraction of metabolites and hormones from plant tissue therefore seems practicable (Chapter II). Quantification of compounds within the context of profiling methods requires particular scrutiny (Chapter II). In Chapter III, the potential of stable-isotope in vivo labelling as a normalisation strategy for profiling data acquired by mass spectrometry is discussed. First promising results were obtained for reproducible quantification by stable-isotope in vivo labelling, which was applied in metabolomic studies.
In-parallel application of metabolite and phytohormone analysis to seedlings of the model plant Arabidopsis thaliana exposed to sulfate limitation was used to investigate the relationship between the endogenous concentration of signal elements and the ‘metabolic phenotype’ of a plant. An automated evaluation strategy was developed to process data on compounds of diverse physiological nature, such as signal elements, genes and metabolites, all of which act in vivo in a conditional, time-resolved manner (Chapter IV). The final data analysis focussed on the conditionality of signal-metabolome interactions.
Integration of digital elevation models and satellite images to investigate geological processes.
(2006)
In order to better understand the geological boundary conditions for ongoing or past surface processes, geologists face two important questions: (1) How can we gain additional knowledge about geological processes by analysing digital elevation models (DEMs) and satellite images? (2) Do these efforts present a viable approach to more efficient research? Here, we present case studies at a variety of scales and levels of resolution to illustrate how classical geological approaches can be substantially complemented and enhanced with remote-sensing techniques. Commonly, satellite- and DEM-based studies are used as a first step in assessing areas of geological interest. While in the past the analysis of satellite imagery (e.g. Landsat TM) and aerial photographs was carried out to characterize regional geological characteristics, particularly structure and lithology, geologists have increasingly ventured into a process-oriented approach. This entails assessing structures and geomorphic features with a concept that includes active tectonics, or tectonic activity on time scales relevant to humans. In addition, these efforts involve analyzing and quantifying the processes acting at the surface by integrating different remote-sensing and topographic data (e.g. SRTM-DEM, SSM/I, GPS, Landsat 7 ETM, Aster, Ikonos…). A combined structural and geomorphic study in the hyperarid Atacama Desert demonstrates the use of satellite and digital elevation data for assessing geological structures formed by long-term (millions of years) feedback mechanisms between erosion and crustal bending (Zeilinger et al., 2005). The medium-term change of landscapes, over hundreds of thousands to millions of years in a more humid setting, is shown in an example from southern Chile.
Based on an analysis of rivers and watersheds combined with landscape parameterization using digital elevation models, the geomorphic evolution and change in drainage pattern in the Coastal Cordillera can be quantified and put into the context of the seismotectonic segmentation of a tectonically active region. This has far-reaching implications for earthquake rupture scenarios and hazard mitigation (K. Rehak, see poster at the IMAF Workshop). Two examples illustrate short-term processes on decadal, centennial and millennial time scales. One study uses orogen-scale precipitation gradients derived from remotely sensed passive microwave data (Bookhagen et al., 2005a). It demonstrates how debris flows were triggered as the response of slopes to abnormally strong rainfall in the interior parts of the Himalaya during intensified monsoons. The area of the orogen that receives high amounts of precipitation during intensified monsoons also contains numerous landslide deposits of up to 1 km³ volume that were generated during intensified monsoon phases at about 27 and 9 ka (Bookhagen et al., 2005b). Another project, in the Swiss Alps, compared sets of aerial photographs recorded in different years; by calculating high-resolution surfaces, the mass transport in a landslide could be reconstructed (M. Schwab, Universität Bern). All these examples, although representing only a short and limited selection of projects using remote-sensing data in geology, share the common goal of quantifying geological processes. With increasing data resolution and new sensors, future projects will enable us to recognize even more patterns and structures indicative of geological processes in tectonically active areas. This is crucial for the analysis of natural hazards such as earthquakes, tsunamis and landslides, as well as hazards related to climatic variability.
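As a concrete example of DEM-based landscape parameterization (our illustration; the cited studies use their own, more elaborate metrics), the hypsometric integral of a catchment can be computed directly from an elevation grid:

```python
import numpy as np

def hypsometric_integral(elevation):
    """Hypsometric integral of a catchment: mean elevation normalized by
    total relief. Values near 1 indicate young, weakly dissected
    landscapes; values near 0 indicate strongly eroded ones."""
    z = np.asarray(elevation, dtype=float)
    z_min, z_max = z.min(), z.max()
    return float((z.mean() - z_min) / (z_max - z_min))
```

Comparing such indices across sub-catchments is one simple way to reveal the differential landscape evolution discussed above.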
The integration of remotely sensed data at different spatial and temporal scales with field observations is becoming increasingly important. Many presently highly populated and increasingly utilized regions are subject to significant environmental pressure and often constitute areas of concentrated economic value. Combined remote sensing and ground-truthing in these regions is particularly important, as geological, seismicity and hydrological data may be limited there owing to the recency of infrastructural development. Monitoring ongoing processes and evaluating the remotely sensed data in terms of the recurrence of events will greatly enhance our ability to assess and mitigate natural hazards.
This thesis studies strong, completely charged polyelectrolyte brushes. Extensive molecular dynamics simulations are performed on different polyelectrolyte brush systems using local compute servers and massively parallel supercomputers. The full Coulomb interaction of charged monomers, counterions and salt ions is treated explicitly. The polymer chains are anchored by one of their ends to an uncharged planar surface and are treated under good solvent conditions. Monovalent salt ions (1:1 type) are modelled in the same way as counterions. The studies concentrate on three different brush systems at constant temperature and moderate Coulomb interaction strength (Bjerrum length equal to bond length). The first system consists of a single polyelectrolyte brush anchored to a plane with varying grafting density. Results show that the chains are extended up to about 2/3 of their contour length. The brush thickness grows slightly with increasing anchoring density. This weak dependence of the brush height on grafting density is in contrast to the well-known scaling result for the osmotic brush regime; the simulation result has therefore stimulated further development of theory as well as new experimental investigations of polyelectrolyte brushes. The observation can be understood on a semi-quantitative level using a simple scaling model that incorporates excluded volume effects in a free-volume formulation, in which an effective cross-section is assigned to the polymer chain from which counterions are excluded. The resulting regime is called the nonlinear osmotic brush regime. Recently this regime was also observed in experiments. The second system studied consists of polyelectrolyte brushes with added salt in the nonlinear osmotic regime. Added salt is an important parameter for tuning the structure and properties of polyelectrolytes. Further motivation comes from a theoretical scaling prediction by Pincus for the salt dependence of the brush thickness.
In the high-salt limit (salt concentration much larger than counterion concentration) the brush height is predicted to decrease with increasing external salt, but with a relatively weak power law with exponent -1/3. Some experimental and theoretical work confirms this prediction, but other results contradict it. In such a situation, simulations are performed to validate the theoretical prediction. The simulation results show that the brush thickness decreases with added salt and is indeed in quite good agreement with the scaling prediction by Pincus. The relation between the buffer concentration and the effective ionic strength inside the brush at varying salt concentration is of interest from both a theoretical and an experimental point of view. The simulations show that mobile ions (counterions as well as salt) distribute inhomogeneously inside and outside the brush. To explain the relation between the internal ion concentration and the buffer concentration, a Donnan equilibrium approach is employed. Modifying the Donnan approach by taking into account the self-volume of the polyelectrolyte chains, as indicated above, the simulation results can be explained using the same effective cross-section for the polymer chains. The extended Donnan equilibrium relation represents an interesting theoretical prediction that should be checked against experimental data. The third system consists of two interacting polyelectrolyte brushes grafted to two parallel surfaces. The interactions between brushes are important, for instance, in the stabilization of dispersions against flocculation. In the simulations, the pressure is evaluated as a function of the separation D between the two grafting planes. The pressure behaviour shows different regimes with decreasing separation, in qualitative agreement with experimental data.
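The ideal Donnan balance underlying this argument can be sketched as follows (illustrative names; the thesis's extension further corrects the internal volume by the chains' effective cross-section):

```python
import math

def donnan_ions(c_fixed, c_salt):
    """Ideal Donnan equilibrium between a brush carrying a fixed charge
    concentration c_fixed and a 1:1 salt reservoir at concentration c_salt.
    Solves electroneutrality   c_counter = c_co + c_fixed
    with equal ion products    c_counter * c_co = c_salt**2.
    Returns (c_counter, c_co) inside the brush."""
    c_co = 0.5 * (-c_fixed + math.sqrt(c_fixed ** 2 + 4.0 * c_salt ** 2))
    return c_co + c_fixed, c_co
```

In the high-salt limit (c_salt much larger than c_fixed) the internal and external salt concentrations converge, which is the regime where the Pincus h ∝ c_salt^(-1/3) prediction applies.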
At relatively weak compression, the pressure obtained in the simulation agrees with the 1/D power law predicted by scaling theory. Beyond that, the present study provides new insight into the interaction between polyelectrolyte brushes.
The major aim of this work was the identification of new phloem sap proteins and a metabolic characterisation of this transport fluid. The experiments were performed on three plant species: C. sativus, C. maxima and B. napus. To characterise the phloem samples from B. napus, a new model plant for phloem analysis, western blot tests together with metabolite profiling were performed. GC-MS metabolite profiling and enzyme assays were used to measure metabolites in the phloem of B. napus. Results from the phloem sap measurements showed, as expected, a sugar distribution typical of apoplasmic phloem loaders, with sucrose as the predominant sugar. In stem extracts, the most abundant sugar was glucose, with much lower fructose and sucrose levels. With the GC-MS approach it was possible to identify a number of metabolites that showed a differential distribution when phloem and stem tissue extracts were compared. For protein identification, two different approaches were employed: (i) screening expression libraries with antisera specific to total phloem protein and (ii) protein separation on 2-DE gels followed by ESI-MS/MS sequence analysis. For the first approach, three different phloem protein-specific antisera were produced and expression libraries were constructed. The phloem protein antisera were tested for specificity, and some attempts were undertaken to identify specific epitopes. Screening of the libraries resulted in the identification of 14 different proteins from all investigated species. Analysis of B. napus phloem sap proteins from 2-DE gels with ESI-MS/MS resulted in the identification of five different proteins. The phloem localisation of the identified proteins was additionally confirmed by western blot tests using specific antibodies. In order to functionally characterise some selected phloem proteins from B.
napus, the group of potential calcium-binding polypeptides was analysed for functional Ca²⁺-binding properties, and several Ca²⁺-binding proteins could be isolated. However, their sequences could not yet be determined. Another approach used for functional protein characterisation was the analysis of Arabidopsis T-DNA insertion mutants. Four available mutants with insertions in phloem protein-specific genes were chosen from the SALK and GABI-Kat collections, and selected homozygous lines were tested for the presence of the investigated proteins. In order to verify whether the product of one of the mutated genes (GRP 7) is transported through the phloem, grafting experiments were performed followed by western blot analyses. Although the employed antiserum against the GRP 7 protein did not allow distinguishing between the mutant and the wild-type plants, successful Arabidopsis grafting could be established as a promising method for further studies on protein translocation through the phloem.
Investigation of tropospheric Arctic aerosol and mixed-phase clouds using airborne lidar techniques
(2005)
An Airborne Mobile Aerosol Lidar (AMALi) was designed and built at the Alfred Wegener Institute for Polar and Marine Research (AWI) in Potsdam, Germany, for lower-tropospheric aerosol and cloud research under harsh Arctic conditions. The system was successfully used during two AWI airborne field campaigns, ASTAR 2004 and SVALEX 2005, performed in the vicinity of Spitsbergen in the Arctic. Novel evaluation schemes, the Two-Stream Inversion and the Iterative Airborne Inversion, were applied to the obtained lidar data, allowing calculation of particle extinction and backscatter coefficient profiles with corresponding lidar ratio profiles characteristic of Arctic air. Comparison of these lidar results with the results of other in-situ and remote instrumentation (the ground-based Koldewey Aerosol Raman Lidar (KARL), sun photometer, radiosounding, satellite imagery) allowed clean versus polluted (Arctic Haze) characteristics of Arctic aerosols to be distinguished. Moreover, data interpretation by means of the ECMWF Operational Analyses and the small-scale dispersion model EULAG allowed study of the effects of Spitsbergen's orography on the aerosol load in the planetary boundary layer. With respect to cloud studies, a new methodology alternating remote AMALi measurements with airborne in-situ measurements of cloud optical and microphysical parameters proved feasible for studies of low-density mixed-phase clouds. An example of this approach, the observation of natural cloud seeding (the feeder-seeder phenomenon) with ice crystals precipitating into a lower supercooled stratocumulus deck, is discussed in terms of lidar signal intensity profiles and corresponding depolarisation ratio profiles.
For parts of the cloud system characterised by almost negligible multiple scattering, the calculation of particle backscatter coefficient profiles was possible using the lidar ratio information obtained from the in-situ measurements in the ice-crystal cloud and the water cloud.
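Two elementary quantities behind such profiles can be sketched generically (this is standard lidar preprocessing, not the AMALi-specific Two-Stream or Iterative Airborne Inversion schemes):

```python
def range_corrected(signal, ranges):
    """Range-corrected lidar signal P(r) * r**2, compensating the
    geometric 1/r**2 decay of the backscattered power."""
    return [p * r ** 2 for p, r in zip(signal, ranges)]

def depolarisation_ratio(p_parallel, p_perpendicular):
    """Volume linear depolarisation ratio per range bin: cross- over
    co-polarised signal; enhanced values flag non-spherical scatterers
    such as the ice crystals in mixed-phase clouds."""
    return [pp / pl for pl, pp in zip(p_parallel, p_perpendicular)]
```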
This paper argues that the texts surviving from the Old English period do not reflect the spoken language of the bulk of the population under Anglo-Saxon elite domination. While the Old English written documents suggest that the language was kept remarkably unchanged, i.e. was strongly monitored, during the long OE period (some 500 years!), the spoken, "real" Old English is likely to have been very different and much more of the Middle English type than the written texts suggest. "Real Old English", of course, only appeared in writing after the Norman Conquest. Middle English is therefore claimed to have begun with the 'Late British' speaking shifters to Old English. The shift patterns must have differed in the various parts of the island of Britain, as the shifters became exposed to further language contact: with the Old Norse adstrate in the Danelaw areas and with the Norman superstrate particularly in the South East, the South West having been least exposed to language contact after the original shift from 'Late British' to Old English. This explains why the North was historically the most innovative zone, and it also explains the conservatism of the present-day dialects in the South West. It is high time that historical linguists acknowledge the arcane character of the Old English written texts.
In recent years, the aim of supramolecular synthesis has been not only the creation of particular structures but also the introduction of specific functions into these supramolecules. The present work describes the use of the ionic self-assembly (ISA) route to generate nanostructured materials with integrated functionality. Since the ISA strategy has proved to be a facile method for the production of liquid-crystalline materials, we investigated the phase behaviour, physical properties and function of a variety of ISA materials comprising a perylene derivative as the employed oligoelectrolyte. Functionality was introduced into the materials through the use of functional surfactants. To meet the requirements for producing functional ISA materials via functional surfactants, we designed and synthesized pyrrole-derived monomers as surfactant building blocks. Owing to the presence of the pyrrole moiety, these surfactants are not only polymerizable but also potentially conductive when polymerized. We adopted single-tailed and double-tailed N-substituted pyrrole monomers as target molecules. Since routine characterization of the double-tailed pyrrole-containing surfactant indicated very interesting, complex phase behaviour, a comprehensive investigation of its interfacial properties and mesophase behaviour was conducted. The synthesized pyrrole-derived surfactants were then employed in the synthesis of ISA complexes. The self-assembled materials were characterized and subsequently polymerized by both chemical and electrochemical methods. The changes in the structure and properties of the materials caused by the in-situ polymerization are addressed. In the second part of this work, the motif investigated was a property rather than a function. Since chiral superstructures have attracted much attention during the last few years, we investigated the possibility of chiral ISA materials through the use of chiral surfactants.
Thus, the work involved synthesis of novel chiral surfactants and their incorporation in ISA materials with the aim of obtaining ionically self-assembled chiral superstructures. The results and insights presented here suggest that the presented synthesis strategy can be easily extended to incorporate any kind of charged tectonic unit with desired optical, electrical, or magnetic properties into supramolecular assemblies for practical applications.
We consider a system of infinitely many hard balls in R^d undergoing Brownian motion and subject to a smooth pair potential. It is modelled by an infinite-dimensional stochastic differential equation with a local time term. We prove that the set of all equilibrium measures, i.e. solutions of a detailed balance equation, coincides with the set of canonical Gibbs measures associated with the hard-core potential added to the smooth interaction potential.
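Schematically, and in our own notation rather than the thesis's exact formulation, such a dynamics can be written as an infinite gradient-type SDE with collision local times:

```latex
\mathrm{d}X_t^i = \mathrm{d}W_t^i
  - \frac{\beta}{2} \sum_{j \neq i} \nabla\varphi\bigl(X_t^i - X_t^j\bigr)\,\mathrm{d}t
  + \sum_{j \neq i} \bigl(X_t^i - X_t^j\bigr)\,\mathrm{d}L_t^{ij},
  \qquad i \in \mathbb{N},
```

where the W^i are independent Brownian motions in R^d, φ is the smooth pair potential, and L^{ij} is the local time of the collision set {|X_t^i - X_t^j| = r} that keeps balls of diameter r apart. Detailed balance with respect to this dynamics then singles out the canonical Gibbs measures for the hard-core plus smooth potential.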
Neolignans, dehydrodimers of phenylpropenes, are natural products that exhibit diverse biological activities. 8,5’-Neolignans containing a trans-dihydrobenzofuran skeleton are the most abundant neolignans in nature. The published syntheses of trans-dihydrobenzofurans are multistep procedures that are time consuming and provide the product in low yield. Furthermore, all dimerisation reactions, whether in the presence of enzymes or mediated by metal salts, yield dimers consisting of two units of the same phenylpropene, substantially narrowing the accessible substitution patterns. Two general synthetic approaches were examined. The first strategy was the enantioselective deprotonation at the α-carbon of o-alkyl phenols in the presence of a chiral diamine and sBuLi. The synthesis of several new phosphorus-based directed ortho-metalation groups was studied. The examined compounds bearing these new groups decomposed even under very mild reaction conditions and are not suitable for application in the synthesis. The second strategy was to examine a formal [3+2] cycloaddition, the transition-metal-catalysed Heck oxyarylation, as the synthetic approach to compounds with a trans-dihydrobenzofuran skeleton. The palladium-catalysed Heck oxyarylation with halogenophenols or ortho-diazonium phenols as the starting material afforded the trans-dihydrobenzofuran compounds as the major products in acceptable yield and in one step. The products were formed under ligand-free conditions as well as in the presence of some strongly coordinating ligands (Ph3P). Experiments with several chiral ligands showed that the obtained trans-dihydrobenzofurans were racemic mixtures. This result suggests the formation of an achiral intermediate along the reaction pathway, which causes the lack of stereoselectivity in the products.
Initially formed trans-dihydrobenzofuran compounds are the key precursors of many naturally occurring neolignans, and can be easily converted to 8,5’-neolignan derivatives.
Connective ties in discourse: three ERP studies on causal, temporal and concessive connective ties and their influence on language processing. Questions: In four experiments, the influence of lexical connectives such as "darum" (therefore), "danach" (afterwards) and "trotzdem" (nevertheless) on the processing of short two-sentence discourses was examined and compared to the processing of deictic sentential adverbs such as "gestern" (yesterday) and "lieber" (rather). These latter words do not have the property of signaling a certain discourse relation between two sentences, as connective ties do. Three questions were central to the work: (1) Do the processing contrasts found between connective and non-connective elements extend to connective ties and deictic sentential adverbs (experiments 2 and 3)? (2) Does the semantic content of the connective ties play the primary role, i.e. is the major distinction to be made indeed between connective and non-connective, or instead between causal, temporal and concessive? (3) When precisely is the information provided by connective ties used? There is some evidence that connective ties can have an immediate influence on the integration of subsequent elements, but the end of the second sentence appears to play an important role as well (experiments 2, 3 and 4). Conclusions: First of all, the theoretical distinction between connective and non-connective elements does indeed have "cognitive reality". This has already been shown in previous studies. The present studies, however, show that there is also a difference between one-place discourse elements (deictic sentential adverbs) and two-place discourse elements, namely connective ties, since all experiments examining this contrast found evidence for qualitatively and quantitatively different processing (experiments 1, 2 and 3). Secondly, the semantic type of the connective ties also plays a role.
This was not shown for the LAN, which was found for all connective ties compared to non-connective elements and was consequently interpreted as a more abstract reflection of the integration of connective ties. There was also no difference between causal and temporal connective ties before the end of the discourses in experiment 3. However, the N400 found for incoherent discourses in experiment 2, which was larger for connective incoherent than for non-connective incoherent discourses, as well as the P3b found for concessive connective ties in the comparison between causal and concessive connective ties, gave reason to assume that the semantic content of connective ties is used in incremental processing, and that the relation signaled by the connective tie is the one that readers attempt to construct. Concerning when the information provided by connective ties is used, it appears that connectivity is generally and obligatorily taken at face value. As long as the meaning of a connective tie did not conflict with a preferred canonical discourse relation, no differences were found between varying connective discourses (experiment 3). However, the fact that concessive connective ties announce the need for a more complex text representation was recognized and used immediately (experiment 4). Additionally, a violation of the discourse relation resulted in more difficult semantic integration if a connective tie was present (experiment 2). It is therefore concluded that connective ties influence processing immediately. This claim has to be qualified somewhat, since the sentence-final elements suggested that connective ties trigger different integration processes than non-connective elements. It seems that the answer to the question of when connective ties are processed is neither exclusively 'immediately' nor exclusively 'afterwards': both viewpoints are correct.
It is suggested here that before the end of a discourse, economy plays a central role, in that a canonical relation is assumed unless there is evidence to the contrary. A connective tie could have the function of reducing the dimensions evaluated in a discourse to the one signaled by the connective tie. At the end of the discourse, the representation is evaluated and verified, and an integrated situation model is constructed. Here, the complexity of the different discourse relations that connective ties can signal is expressed.
This thesis aimed to investigate several fundamental and perplexing questions relating to the phloem loading and transport mechanisms of Cucurbita maxima by combining metabolomic analysis with cell biological techniques. This putative symplastic loading species has long been used for experiments on phloem anatomy, phloem biochemistry, phloem transport physiology and phloem signalling. Symplastic loading species have been proposed to use a polymer-trapping mechanism to accumulate RFO (raffinose family oligosaccharide) sugars, building up a high osmotic pressure in the minor veins that sustains the concentration gradient driving mass flow. However, extensive evidence indicating a low sugar concentration in their phloem exudates is a long-known problem that conflicts with this hypothesis. Previous metabolomic analysis shows that the concentration of many small molecules in phloem exudates is higher than in leaf tissues, which indicates an active apoplastic loading step. From the viewpoint of the phloem metabolome, a symplastic loading mechanism therefore cannot explain how small molecules other than RFO sugars are loaded into the phloem. Most studies of phloem physiology using cucurbits have neglected the possible functions of vascular architecture in phloem transport. It is well known that there are two phloem systems in cucurbits with distinctly different anatomical features: the central phloem and the extrafascicular phloem. However, mistaken conclusions in previous reports about the sources of cucurbit phloem exudation have hindered consideration of the idea that there may be important differences between these two phloem systems. The major results are summarized below: 1) O-linked glycans in C. maxima were structurally identified as beta-1,3-linked glucose polymers, and the composition of glycans in cucurbits was found to be species-specific. Inter-species grafting experiments proved that these glycans are phloem-mobile and transported unidirectionally from scion to stock.
2) As indicated by stable isotopic labelling experiments, a considerable amount of carbon is incorporated into small metabolites in phloem exudates. However, the incorporation of carbon into RFO sugars is much faster than into other metabolites. 3) Both CO2 labelling experiments and comparative metabolomic analysis of phloem exudates and leaf tissues indicated that metabolic processes other than RFO sugar metabolism play an important role in cucurbit phloem physiology. 4) The underlying assumption that the central phloem of cucurbits continuously releases exudates after physical incision was proved wrong by rigorous experiments, including direct observation by normal microscopy and combined multiple microscopic methods. Errors in previous experimental confirmation of phloem exudation in cucurbits are critically discussed. 5) Extrafascicular phloem was proved to be functional, as indicated by phloem-mobile carboxyfluorescein tracer studies. Commissural sieve tubes interconnect phloem bundles into a complete super-symplastic network. 6) Extrafascicular phloem represents the main source of exudates following physical incision. The major metabolites transported by the extrafascicular phloem are non-sugar compounds, including amino acids, O-glycans and amines. 7) Central phloem contains almost exclusively RFO sugars, at estimated concentrations of up to 1 to 2 molar. The major RFO sugar present in central phloem is stachyose. 8) Cucurbits utilize two structurally different phloem systems for transporting different groups of metabolites (RFO sugars and non-RFO compounds). This implies that cucurbits may use spatially separated loading mechanisms (apoplastic loading for extrafascicular phloem and symplastic loading for central phloem) to supply nutrients to sinks. 9) Along the transport systems, RFO sugars were mainly distributed within central phloem tissues.
There were only small amounts of RFO sugars present in xylem tissues (millimolar range) and trace amounts of RFO sugars in cortex and pith. The composition of small molecules in external central phloem is very different from that in internal central phloem. 10) Aggregated P-proteins were manually dissected from central phloem and analysed by both SDS-PAGE and mass spectrometry. Partial sequences of peptides were obtained by QTOF de novo sequencing from trypsin digests of three SDS-PAGE bands. None of these partial sequences shows significant homology to known cucurbit phloem proteins or other plant proteins. This proves that these central phloem proteins are a completely new group of proteins, different from those in extrafascicular phloem. The extensively analysed P-proteins reported in the literature to date are therefore now shown to arise from extrafascicular phloem, not central phloem, and thus do not appear to be involved in the occlusion processes in central phloem.
We give a necessary and sufficient condition for the existence of an increasing coupling of N (N >= 2) synchronous dynamics on S^{Z^d} (probabilistic cellular automata, PCA). Increasing means that the coupling preserves stochastic ordering. We first present our main construction theorem in the case where S is totally ordered; applications to attractive PCAs are given. When S is only partially ordered, we show on two examples that a coupling of more than two synchronous dynamics may not exist. We also prove an extension of our main result for a particular class of partially ordered spaces.
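As a sketch of the notions involved (the notation below is illustrative, not taken from the paper itself): stochastic ordering on a product space and the corresponding notion of an increasing coupling can be written as

```latex
% Stochastic ordering of probability measures on $S^{\mathbb{Z}^d}$,
% with $S$ totally ordered and $f$ ranging over bounded increasing functions:
\mu \preceq \nu
  \quad\Longleftrightarrow\quad
  \int f \, d\mu \;\le\; \int f \, d\nu
  \quad \text{for every bounded increasing } f .
% An increasing coupling of $N$ synchronous dynamics $P^{(1)},\dots,P^{(N)}$
% is a joint process $(X^{(1)}_t,\dots,X^{(N)}_t)$ whose $k$-th marginal
% evolves under $P^{(k)}$ and which preserves the coordinatewise order:
X^{(1)}_0 \le \dots \le X^{(N)}_0
  \;\Longrightarrow\;
  X^{(1)}_t \le \dots \le X^{(N)}_t \quad \text{for all } t .
```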
Ergodicity of PCA
(2004)
For a general attractive probabilistic cellular automaton on S^{Z^d}, we prove that the (time-) convergence towards equilibrium of this Markovian parallel dynamics, exponentially fast in the uniform norm, is equivalent to a condition (A). This condition means exponential decay of the influence from the boundary for the invariant measures of the system restricted to finite boxes. For a class of reversible PCA dynamics on {-1,+1}^{Z^d} with a naturally associated Gibbsian potential rho, we prove that a (spatial-) weak mixing condition (WM) for rho implies the validity of assumption (A); thus exponential (time-) ergodicity of these dynamics towards the unique Gibbs measure associated to rho holds. On some particular examples we state that exponential ergodicity holds as soon as there is no phase transition.
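Schematically, a boundary-influence decay condition of this type can be rendered as follows (constants and notation are illustrative, not the paper's exact formulation of condition (A)):

```latex
% Schematic decay of boundary influence: for invariant measures of the
% system restricted to a finite box $\Lambda$ with boundary conditions
% $\sigma,\sigma'$, the influence on a local observable $f$ decays
% exponentially in the distance from $\operatorname{supp} f$ to $\partial\Lambda$:
\sup_{\sigma,\sigma'}
  \bigl| \mu_\Lambda^{\sigma}(f) - \mu_\Lambda^{\sigma'}(f) \bigr|
  \;\le\; C \,\|f\|_\infty \,
  e^{-\gamma \, d(\operatorname{supp} f,\; \partial\Lambda)},
  \qquad C,\gamma > 0 .
```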
In this paper, we consider families of time Markov fields (or reciprocal classes) which have the same bridges as a Brownian diffusion. We characterize each class as the set of solutions of an integration by parts formula on the space of continuous paths C([0,1]; R^d). Our techniques provide a characterization of gradient diffusions by a duality formula and, in the case of reversibility, a generalization of a result of Kolmogorov.
We develop a cluster expansion in space-time for an infinite-dimensional system of interacting diffusions where the drift term of each diffusion depends on the whole past of the trajectory; these interacting diffusions arise when considering the Langevin dynamics of a ferromagnetic system subjected to a disordered external magnetic field.
The authors analyse different Gibbsian properties of interacting Brownian diffusions X indexed by the d-dimensional lattice. In the first part of the paper, these processes are characterized as Gibbs states on path spaces. In the second part of the paper, they study the Gibbsian character on R^{Z^d} of the law at time t of the infinite-dimensional diffusion X(t), when the initial law is Gibbsian. AMS Classifications: 60G15, 60G60, 60H10, 60J60
We prove in this paper an existence result for infinite-dimensional stationary interacting Brownian diffusions. The interaction is supposed to be small in the norm ||.||∞ but is otherwise very general, being possibly non-regular and non-Markovian. Our method consists of using the characterization of such diffusions as space-time Gibbs fields, so that we construct them by space-time cluster expansions in the small coupling parameter.
Time series analysis
(2004)
The thesis assesses the contribution of the technology option of Carbon Capture and Sequestration (CCS) to climate change mitigation. CCS means that CO2 is captured at large industrial facilities and sequestered in geological structures. The assessment uses the endogenous growth model MIND, in which the various climate change mitigation options of reducing economic growth, increasing energy efficiency, changing the energy mix and deploying CCS are assessed simultaneously. An important question is whether CCS is a temporary or a long-term solution. The results show that CCS reaches its peak contribution in the middle of the 21st century, which allows prolonged use of relatively cheap fossil energy carriers. However, this leads to a delayed introduction of renewable energy carriers. The technology pathways entail different costs of climate change mitigation. The use of CCS delays and reduces the costs of climate change mitigation. However, the delayed introduction of renewable energy carriers leads to reduced technological learning, which induces higher costs in the longer term. All in all, the temporary use of CCS reduces the costs of climate change mitigation. This result is robust, as tested with various uncertainty analyses.
Adsorption layers of soluble surfactants enable and govern a variety of phenomena in surface and colloidal science, such as foams. The ability of a surfactant solution to form wet foam lamellae is governed by the surface dilatational rheology: only systems having a non-vanishing imaginary part in their surface dilatational modulus, E, are able to form wet foams. The aim of this thesis is to illuminate the dissipative processes that give rise to the imaginary part of the modulus. Two controversial models are discussed in the literature. The reorientation model assumes that the surfactants adsorb in two distinct states differing in their orientation. This model is able to describe the frequency dependence of the modulus E; however, it assumes reorientation dynamics in the millisecond time regime. In order to assess this model, we designed an SHG pump-probe experiment that addresses the orientation dynamics. The results reveal that the orientation dynamics occur in the picosecond time regime, in strong contradiction with the two-state model. The second model regards the interface as an interphase: the adsorption layer consists of a topmost monolayer and an adjacent sublayer, and the dissipative process is due to the molecular exchange between the two layers. The assessment of this model required the design of an experiment that discriminates between the surface compositional term and the sublayer contribution. Such an experiment was successfully designed, and results on elastic and viscoelastic surfactants provided evidence for the correctness of the model. Because of its inherent surface specificity, surface SHG is a powerful analytical tool that can be used to gain information on the molecular dynamics and reorganization of soluble surfactants; it is a central element of both experiments. However, it imposes several structural requirements on the model system. During the course of this thesis, a proper model system was identified and characterized.
The combination of several linear and nonlinear optical techniques allowed for a detailed picture of the interfacial architecture of these surfactants.
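For context, the surface dilatational modulus E referred to above is conventionally written as a complex quantity (a textbook definition, not a formulation specific to this thesis):

```latex
% Complex surface dilatational modulus under an oscillatory area change:
E \;=\; \frac{d\gamma}{d\ln A} \;=\; E' + iE'' \;=\; |E|\, e^{i\phi}
% $\gamma$: surface tension, $A$: surface area,
% $E'$: elastic (storage) part, $E''$: viscous (loss) part.
% A non-vanishing $E''$ reflects a dissipative process at the interface;
% per the abstract, only such systems can form wet foams.
```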
This study analyses the transformation of the Slovak administration in the telecommunications sector between 1989 and 2004. The dynamic telecom sector is a good example of the transition problems of post-socialist administration, with special regard to the change of the regulation regime. After briefly describing the role of the telecom sector within the economy, the Slovak sectoral policy is analysed. The focus is laid on telecom legislation (including the regulation framework), liberalization of the telecom market and privatisation of the formerly state-owned telecom operator. The transformation of the organizational structure of the "Slovak telecommunication administration" is analysed in particular at the level of the ministry and the regulatory agency.
Existing theoretical literature fails to explain satisfactorily the differences between the pay of workers who are covered by collective agreements and of those who are not. This study aims at providing a model framework amenable to an analysis of this issue. Our general-equilibrium approach integrates a dual labor market and a two-sector product market. The results suggest that the so-called 'union wage gap' is largely determined by the degree of centralization of the bargains and, to a somewhat lesser extent, by the expenditure share of the unionized sector's goods.
Post-translational redox-regulation is a well-known mechanism to regulate enzymes of the Calvin cycle, oxidative pentose phosphate cycle, NADPH export and ATP synthesis in response to light. The aim of the present thesis was to investigate whether a similar mechanism also regulates carbon storage in leaves. Previous studies have shown that the key regulatory enzyme of starch synthesis, ADPglucose pyrophosphorylase (AGPase), is inactivated by formation of an intermolecular disulfide bridge between the two catalytic subunits (AGPB) of the heterotetrameric holoenzyme in potato tubers, but the relevance of this mechanism for regulating starch synthesis in leaves was not investigated. The work presented in this thesis shows that AGPase is subject to post-translational redox-regulation in leaves of pea, potato and Arabidopsis in response to day-night changes. Light was shown to trigger post-translational redox-regulation of AGPase. AGPB was rapidly converted from a dimer to a monomer when isolated pea chloroplasts were illuminated, and from a monomer to a dimer when preilluminated leaves were darkened. Conversion of AGPB from dimer to monomer was accompanied by an increase in activity due to changes in the kinetic properties of the enzyme. Studies with pea chloroplast extracts showed that AGPase redox-activation is mediated by thioredoxins f and m from spinach in vitro. In a further set of experiments it was shown that sugars provide a second input leading to AGPase redox-activation and increased starch synthesis, and that they can act as a signal independent of light. External feeding of sugars such as sucrose or trehalose to Arabidopsis leaves in the dark led to conversion of AGPB from dimer to monomer and to an increase in the rate of starch synthesis, while there were no significant changes in the level of 3PGA, an allosteric activator of the enzyme, or in the NADPH/NADP+ ratio.
Experiments with transgenic Arabidopsis plants with altered levels of trehalose 6-phosphate (T6P), the precursor of trehalose synthesis, provided genetic evidence that T6P rather than trehalose leads to AGPase redox-activation. Compared to wild type, leaves expressing E. coli trehalose-phosphate synthase (TPS) in the cytosol showed increased activation of AGPase and higher starch levels during the day, while trehalose-phosphate phosphatase (TPP)-overexpressing leaves showed the opposite. These changes occurred independently of changes in sugar and sugar-phosphate levels and the NADPH/NADP+ ratio. External supply of sucrose to wild-type and TPS-overexpressing leaves led to monomerisation of AGPB, while this response was attenuated in TPP-expressing leaves, indicating that T6P is involved in the sucrose-dependent redox-activation of AGPase. To provide biochemical evidence that T6P promotes redox-activation of AGPase independently of cytosolic elements, T6P was fed to intact isolated chloroplasts. Incubation with T6P at concentrations down to 100 µM, but not with sucrose 6-phosphate, sucrose, trehalose or Pi as controls, significantly and specifically increased AGPB monomerisation and AGPase activity within 15 minutes, implying that T6P acts as a signal reporting the cytosolic sugar status to the chloroplast. The response to T6P did not involve changes in the NADPH/NADP+ ratio, consistent with T6P modulating redox-transfer to AGPase independently of changes in the plastidial redox-state. Acetyl-CoA carboxylase (ACCase) is known as the key regulatory enzyme of fatty acid and lipid synthesis in plants. At the start of the present thesis there was mainly in vitro evidence in the literature showing redox-regulation of ACCase by DTT and by thioredoxins f and m. In the present thesis the in vivo relevance of this mechanism for regulating lipid synthesis in leaves was investigated.
ACCase activity measurements in leaf tissue collected at the end of the day and of the night in Arabidopsis revealed a 3-fold higher activation state of the enzyme in the light than in the dark. Redox-activation was accompanied by a change in the kinetic properties of ACCase, leading to an increased affinity for its substrate acetyl-CoA. In further experiments, DTT as well as sucrose were fed to leaves, and both treatments led to a stimulation of the rate of lipid synthesis accompanied by redox-activation of ACCase and a decrease in acetyl-CoA content. In a final approach, comparison of metabolic and transcript profiling after DTT feeding and after sucrose feeding to leaves provided evidence that redox-modification is an important regulatory mechanism in central metabolic pathways such as the TCA cycle and amino acid synthesis, which acts independently of transcript levels.