The closer the better
(2012)
A growing literature has suggested that processing of visual information presented near the hands is facilitated. In this study, we investigated whether the near-hands superiority effect also occurs with the hands moving. In two experiments, participants performed a cyclical bimanual movement task requiring concurrent visual identification of briefly presented letters. For both the static and dynamic hand conditions, the results showed improved letter recognition performance with the hands closer to the stimuli. The finding that the encoding advantage for near-hand stimuli also occurred with the hands moving suggests that the effect is regulated in real time, in accordance with the concept of a bimodal neural system that dynamically updates hand position in external space.
Education in the knowledge society faces many problems. In particular, the interaction between teacher and learner in social networking software is a key factor affecting learners' learning and satisfaction (Prammanee, 2005), where "to teach is to communicate, to communicate is to interact, to interact is to learn" (Hefzallah, 2004, p. 48). Analyzing the relation between teacher-learner interaction on the one hand and learning outcome and learner satisfaction on the other, some basic problems regarding a new learning culture based on social networking software are discussed. Most educational institutions pay considerable attention to equipment and to emerging Information and Communication Technologies (ICTs) in learning situations. They try to incorporate ICT into their institutions as teaching and learning environments because they expect that doing so will improve the outcome of the learning process. Despite this, the learning outcome reported in most studies is very limited, because the expectations placed on self-directed learning are much higher than the reality. Findings from an empirical study investigating the role of teacher-learner interaction through the wiki as a new digital medium in higher education, and its relation to learning outcome and learner satisfaction, are presented together with recommendations on the necessity of pedagogical interactions in support of teaching and learning activities in wiki courses in order to improve the learning outcome. The conclusions show the need for significant changes in vocational teacher training programs for online teachers in order to meet the requirements of new digital media in coherence with a new learning culture. These changes have to address collaborative instead of individual learning and the wiki as a tool for knowledge construction instead of a tool for gathering information.
Deepening Understanding
(2012)
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and ETL transformations. Hard-to-understand attribute labels lead to frustration and time spent to develop and understand ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that our approach is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
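The decryption example above (UNP_PEN_INT becoming UNPAID_PENALTY_INTEREST) can be pictured with a small sketch. The following Python snippet is only an illustration of the general idea, expanding abbreviation tokens with a dictionary assumed to be mined from already-mapped attribute labels; it is not the recommender-like algorithm of the paper, and all names and mappings are hypothetical.

    # Toy illustration of abbreviation-based schema decryption.
    # The dictionary stands in for expansions mined from mapped attribute
    # labels in existing ETL workflows (hypothetical values).
    ABBREVIATIONS = {
        "UNP": "UNPAID",
        "PEN": "PENALTY",
        "INT": "INTEREST",
        "CUST": "CUSTOMER",
    }

    def decrypt_label(label: str) -> str:
        """Expand each underscore-separated token if an expansion is known."""
        return "_".join(ABBREVIATIONS.get(tok, tok) for tok in label.split("_"))

    print(decrypt_label("UNP_PEN_INT"))  # -> UNPAID_PENALTY_INTEREST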
We develop the method of Fischer-Riesz equations for general boundary value problems elliptic in the sense of Douglis-Nirenberg. To this end we reduce them to a boundary problem for a (possibly overdetermined) first order system whose classical symbol has a left inverse. For such a problem there is a uniquely determined boundary value problem which is adjoint to the given one with respect to the Green formula. On using a well elaborated theory of approximation by solutions of the adjoint problem, we find the Cauchy data of solutions of our problem.
Assignments, curriculum framework and background information as the base of developing lessons
(2012)
1. What are the general strengths of the assignments?
2. Structure of the assignment
3. Resources of the assignment
4. Fostering self-expression
5. How could you improve the assignment?
6. Lack of specific examples
7. Not relating the issue to the students
8. Language problems
9. Infeasibility of adaptation
10. In what ways was the additional information useful? How could this be improved?
11. Was the framework useful for you, and in what way?
12. In what ways did the assignments reflect the steps identified in the framework?
The Indian summer monsoon (ISM) is one of the largest climate systems on earth and impacts the livelihood of nearly 40% of the world's population. Despite dedicated efforts, a comprehensive picture of monsoon variability has proved elusive, largely due to the absence of long term high resolution records, the spatial inhomogeneity of the monsoon precipitation, and the complex forcing mechanisms (solar insolation, internal teleconnections, e.g., El Niño-Southern Oscillation, tropical-midlatitude interactions). My work aims to improve the understanding of monsoon variability through generation of long term high resolution palaeoclimate data from climatically sensitive regions in the ISM and westerlies domain. To achieve this aim I have (i) identified proxies (sedimentological, geochemical, isotopic, and mineralogical) that are sensitive to environmental changes; (ii) used the identified proxies to generate long term palaeoclimate data from two climatically sensitive regions, one in the NW Himalayas (the transitional westerlies and ISM domain in the Spiti valley) and one in the core monsoon zone (Lonar lake) in central India; (iii) undertaken a regional overview to generate "snapshots" of selected time slices; and (iv) interpreted the spatial precipitation anomalies in terms of those caused by modern teleconnections. This approach must be considered only as the first step towards identifying past teleconnections, as the boundary conditions in the past were significantly different from today and would have impacted the precipitation anomalies. As the Spiti valley is located in the active tectonic orogen of the Himalayas, it was essential to understand the role of regional tectonics to make valid interpretations of catchment erosion and detrital influx into the lake. My approach of using integrated structural/morphometric and geomorphic signatures provided clear evidence for active tectonics in this area and demonstrated the suitability of these lacustrine sediments as palaeoseismic archives. The investigations on the lacustrine outcrops in the Spiti valley also provided information on changes in the seasonality of precipitation and on the occurrence of frequent and intense periods (ca. 6.8-6.1 cal ka BP) of detrital influx indicating extreme hydrological events in the past. Regional comparison for this time slice indicates a possible extended "break-monsoon like" mode for the monsoon that favors enhanced precipitation over the Tibetan plateau, the Himalayas and their foothills. My studies on surface sediments from Lonar lake helped to identify environmentally sensitive proxies which could also be used to interpret palaeodata obtained from a ca. 10 m long core raised from the lake in 2008. The core encompasses the entire Holocene and is the first well dated (by 14C) archive from the core monsoon zone of central India. My identification of authigenic evaporite gaylussite crystals within the core sediments provided evidence of exceptionally dry conditions during 4.7-3.9 and 2.0-0.5 cal ka BP. Additionally, isotopic investigations on these crystals provided information on eutrophication, stratification, and carbon cycling processes in the lake.
This paper develops a spatial model to analyze the stability of a market sharing agreement between two firms. We find that the stability of the cartel depends on the relative market size of each firm. Collusion is not attractive for firms with a small home market, but the incentive for collusion increases when the firm’s home market is getting larger relative to the home market of the competitor. The highest stability of a cartel and additionally the highest social welfare is found when regions are symmetric. Further we can show that a monetary transfer can stabilize the market sharing agreement.
Carbohydrate recognition is a ubiquitous principle underlying many fundamental biological processes like fertilization, embryogenesis and viral infections. But how carbohydrate specificity and affinity induce a molecular event is not well understood. One of these examples is bacteriophage P22, which binds and infects three distinct Salmonella enterica (S.) hosts. It recognizes and depolymerizes repetitive carbohydrate structures of O antigen in its host's outer membrane lipopolysaccharide molecule. This is mediated by tailspikes, mainly β-helical appendages on the short, non-contractile tail apparatus of phage P22 (a podovirus). The O antigen of all three Salmonella enterica hosts is built from tetrasaccharide repeating units consisting of an identical main chain with a distinguishing 3,6-dideoxyhexose substituent that is crucial for P22 tailspike recognition: tyvelose in S. Enteritidis, abequose in S. Typhimurium and paratose in S. Paratyphi. In the first study the complexes of P22 tailspike with its host's O antigen octasaccharide were characterized. The S. Paratyphi octasaccharide binds less tightly (ΔΔG ≈ 7 kJ/mol) to the tailspike than those of the other two hosts. Crystal structure analysis of P22 tailspike co-crystallized with S. Paratyphi octasaccharides revealed different interactions than those observed before in tailspike complexes with S. Enteritidis and S. Typhimurium octasaccharides. These different interactions occur due to a structural rearrangement in the S. Paratyphi octasaccharide. It results in an unfavorable glycosidic bond Φ/Ψ angle combination that also had occurred when the S. Paratyphi octasaccharide conformation was analyzed in an aprotic environment. Contributions of individual protein surface contacts to binding affinity were analyzed, showing that conserved structural waters mediate specific recognition of all three different Salmonella host O antigens. Although the different O antigen structures possess distinct binding behavior on the tailspike surface, all hosts are recognized and infected by phage P22. Hence, in a second study, binding measurements revealed that multivalent O antigen was able to bind with high avidity to P22 tailspike. Dissociation rates of the polymer were three times slower than for an octasaccharide fragment, pointing towards high affinity for O antigen polysaccharide. Furthermore, when phage P22 was incubated with lipopolysaccharide aggregates before plating on S. Typhimurium cells, P22 infectivity became significantly reduced. Therefore, in a third study, the role of carbohydrate recognition in the infection process was characterized. It was shown that large S. Typhimurium lipopolysaccharide aggregates triggered DNA release from the phage capsid in vitro. This provides evidence that phage P22 does not use a second receptor on the Salmonella surface for infection. P22 tailspike binding and cleavage activity modulate DNA egress from the phage capsid. DNA release occurred more slowly when the phage possessed mutant tailspikes with less hydrolytic activity and was not induced if the lipopolysaccharides contained tailspike-shortened O antigen polymer. Furthermore, the onset of DNA release was delayed by tailspikes with reduced binding affinity. The results suggest a model for P22 infection induced by carbohydrate recognition: tailspikes position the phage on Salmonella enterica and their hydrolytic activity forces a central structural protein of the phage assembly, the plug protein, onto the host's membrane surface.
Upon membrane contact, a conformational change has to occur in the assembly to eject DNA and pilot proteins from the phage to establish infection. Earlier studies had investigated DNA ejection in vitro solely for viruses with long, non-contractile tails (siphoviruses) recognizing protein receptors. Podovirus P22 in this work was therefore the first example of a short-tailed phage with an LPS recognition organelle that can trigger DNA ejection in vitro. However, O antigen binding and cleaving tailspikes are widely distributed in the phage biosphere, for example in siphovirus 9NA. Crystal structure analysis of the 9NA tailspike revealed a fold completely similar to that of the P22 tailspike, although the two only share 36% sequence identity. Moreover, the 9NA tailspike possesses similar enzyme activity towards S. Typhimurium O antigen within conserved amino acids, which are responsible for a DNA ejection process from siphovirus 9NA triggered by lipopolysaccharide aggregates. 9NA expelled its DNA 30 times faster than podovirus P22, although the associated conformational change is controlled by a similarly high activation barrier. The difference in DNA ejection velocity mirrors the different tail morphologies and their efficiency in translating a carbohydrate recognition signal into action.
Asymptotic solutions of the Dirichlet problem for the heat equation at a characteristic point
(2012)
The Dirichlet problem for the heat equation in a bounded domain is characteristic, for there are boundary points at which the boundary touches a characteristic hyperplane t = c, c being a constant. It was I.G. Petrovskii (1934) who first found necessary and sufficient conditions on the boundary which guarantee that the solution is continuous up to the characteristic point, provided that the Dirichlet data are continuous. This paper initiated standing interest in studying general boundary value problems for parabolic equations in bounded domains. We contribute to the study by constructing a formal solution of the Dirichlet problem for the heat equation in a neighbourhood of a characteristic boundary point and showing its asymptotic character.
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, these redesigned implementations are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language including the syntax and semantics of its new language constructs.
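To make the idea of dynamically activated behavioral variations concrete, here is a minimal sketch in plain Python; it is not JCop syntax and makes no claims about JCop's language constructs, it only mimics layer activation with a context manager, and all names are hypothetical.

    # Minimal illustration of context-oriented behavior variations (not JCop).
    from contextlib import contextmanager

    active_layers = set()  # names of currently activated layers

    @contextmanager
    def layer(name):
        """Activate a behavioral layer for the duration of a with-block."""
        active_layers.add(name)
        try:
            yield
        finally:
            active_layers.discard(name)

    def fetch_resource(url):
        """Base behavior plus a context-dependent variation."""
        if "offline" in active_layers:   # variation active only in the offline context
            return "cached copy of " + url
        return "downloaded " + url

    print(fetch_resource("http://example.org"))      # downloaded ...
    with layer("offline"):
        print(fetch_resource("http://example.org"))  # cached copy of ...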
MHC genes encode proteins that are responsible for the recognition of foreign antigens and the triggering of a subsequent, adequate immune response of the organism. Thus they hold a key position in the immune system of vertebrates. It is believed that the extraordinary genetic diversity of MHC genes is shaped by adaptive selectional processes in response to the recurring adaptations of parasites and pathogens. A large number of MHC studies were performed in a wide range of wildlife species aiming to understand the role of immune gene diversity in parasite resistance under natural selection conditions. Methodologically, most of this work, with very few exceptions, has focused only upon the structural (i.e. sequence) diversity of regions responsible for antigen binding and presentation. Most of these studies found evidence that MHC gene variation did indeed underlie adaptive processes and that an individual's allelic diversity explains parasite and pathogen resistance to a large extent. Nevertheless, our understanding of the effective mechanisms is incomplete. A neglected, but potentially highly relevant component concerns the transcriptional differences of MHC alleles. Indeed, differences in the expression levels of MHC alleles and their potential functional importance have remained unstudied. The idea that transcriptional differences might also play an important role relies on the fact that lower MHC gene expression is tantamount to reduced induction of CD4+ T helper cells and thus to a reduced immune response. Hence, I studied the expression of MHC genes and of immune-regulatory cytokines as additional factors to reveal the functional importance of MHC diversity in two free-ranging rodent species (Delomys sublineatus, Apodemus flavicollis) in association with their gastrointestinal helminths under natural selection conditions. I established the method of relative quantification of mRNA on liver and spleen samples of both species in our laboratory. As there was no available information on the nucleic sequences of potential reference genes in either species, PCR primer systems that were established in laboratory mice had to be tested and adapted for both non-model organisms. In due course, sets of stable reference genes for both species were found and thus the preconditions for reliable measurements of mRNA levels established. For D. sublineatus it could be demonstrated that helminth infection elicits aspects of a typical Th2 immune response. Whereas mRNA levels of the cytokine interleukin Il4 increased with the intensity of infection by strongyle nematodes, neither MHC nor cytokine expression played a significant role in D. sublineatus. For A. flavicollis I found a negative association between the parasitic nematode Heligmosomoides polygyrus and hepatic MHC mRNA levels. As a lower MHC expression entails a lower immune response, this could be evidence for an immune evasive strategy of the nematode, as has been suggested for many micro-parasites. This implies that H. polygyrus is capable of actively interfering with MHC transcription. Indeed, this parasite species has long been suspected to be immunosuppressive, e.g. by induction of regulatory T-helper cells that respond with a higher production of interleukin Il10 and transforming growth factor Tgfb. Both cytokines in turn cause an abated MHC expression. By disabling recognition by the MHC molecule, H. polygyrus might be able to prevent an activation of the immune system.
Indeed, I found a strong tendency in animals carrying the allele Apfl-DRB*23 to have an increased infection intensity with H. polygyrus. Furthermore, I found positive and negative associations between specific MHC alleles and other helminth species, as well as typical signs of positive selection acting on the nucleic sequences of the MHC. The latter was evident by an elevated rate of non-synonymous to synonymous substitutions in the MHC sequences of exon 2 encoding the functionally important antigen binding sites whereas the first and third exons of the MHC DRB gene were highly conserved. In conclusion, the studies in this thesis demonstrate that valid procedures to quantify expression of immune relevant genes are also feasible in non-model wildlife organisms. In addition to structural MHC diversity, also MHC gene expression should be considered to obtain a more complete picture on host-pathogen coevolutionary selection processes. This is especially true if parasites are able to interfere with systemic MHC expression. In this case advantageous or disadvantageous effects of allelic binding motifs are abated. The studies could not define the role of MHC gene expression in antagonistic coevolution as such but the results suggest that it depends strongly on the specific parasite species that is involved.
Developing Critical Thinking
(2012)
Developing critical thinking
(2012)
Experimental and quantitative research in the field of human language processing and production strongly depends on the quality of the underlying language material: besides its size, representativeness, variety and balance have been discussed as important factors which influence the design, analysis and interpretation of experiments and their results. This volume brings together creators and users of both general-purpose and specialized lexical resources which are used in psychology, psycholinguistics, neurolinguistics and cognitive research. It aims to be a forum to report experiences and results, review problems and discuss perspectives of any linguistic data used in the field.
Complex networks have been successfully employed to represent different levels of biological systems, ranging from gene regulation to protein-protein interactions and metabolism. Network-based research has mainly focused on identifying unifying structural properties, including small average path length, large clustering coefficient, heavy-tail degree distribution, and hierarchical organization, viewed as requirements for efficient and robust system architectures. Existing studies estimate the significance of network properties using a generic randomization scheme, a Markov-chain switching algorithm, which generates unrealistic reactions in metabolic networks, as it does not account for the physical principles underlying metabolism. Therefore, it is unclear whether the properties identified with this generic approach are related to the functions of metabolic networks. Within this doctoral thesis, I have developed an algorithm for mass-balanced randomization of metabolic networks, which runs in polynomial time and samples networks almost uniformly at random. The properties of biological systems result from two fundamental origins: ubiquitous physical principles and a complex history of evolutionary pressure. The latter determines the cellular functions and abilities required for an organism's survival. Consequently, the functionally important properties of biological systems result from evolutionary pressure. By employing randomization under physical constraints, the salient structural properties, i.e., the small-world property, degree distributions, and biosynthetic capabilities of six metabolic networks from all kingdoms of life, are shown to be independent of physical constraints, and thus likely to be related to evolution and functional organization of metabolism. This stands in stark contrast to the results obtained from the commonly applied switching algorithm. In addition, a novel network property is devised to quantify the importance of reactions by simulating the impact of their knockout. The relevance of the identified reactions is verified by the findings of existing experimental studies demonstrating the severity of the respective knockouts. The results suggest that the novel property may be used to determine the reactions important for viability of organisms. Next, the algorithm is employed to analyze the dependence between mass balance and thermodynamic properties of Escherichia coli metabolism. The thermodynamic landscape in the vicinity of the metabolic network reveals two regimes of randomized networks: those with thermodynamically favorable reactions, similar to the original network, and those with less favorable reactions. The results suggest that there is an intrinsic dependency between thermodynamic favorability and evolutionary optimization. The method is further extended to optimizing metabolic pathways by introducing novel chemically feasible reactions. The results suggest that, in three organisms of biotechnological importance, introduction of the identified reactions may allow for optimizing their growth. The approach is general and allows identifying chemical reactions which modulate the performance with respect to any given objective function, such as the production of valuable compounds or the targeted suppression of pathway activity. These theoretical developments can find applications in metabolic engineering or disease treatment.
The developed randomization method proposes a novel approach to measuring the significance of biological network properties, and establishes a connection between large-scale approaches and biological function. The results may provide important insights into the functional principles of metabolic networks, and open up new possibilities for their engineering.
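For contrast with the mass-balanced approach, the generic Markov-chain switching randomization mentioned above can be sketched as a repeated double-edge swap that preserves node degrees but ignores mass balance. The Python sketch below treats edges as ordered pairs for simplicity; it illustrates the generic null model criticized in the thesis, not the mass-balanced algorithm itself, and all names are chosen for illustration.

    # Degree-preserving switching (double-edge swap) randomization sketch.
    # It ignores chemical constraints, which is exactly the criticism raised above.
    import random

    def switch_randomize(edges, n_swaps, seed=0):
        rng = random.Random(seed)
        edges = [tuple(e) for e in edges]
        existing = set(edges)
        for _ in range(n_swaps):
            (a, b), (c, d) = rng.sample(edges, 2)
            if len({a, b, c, d}) < 4:
                continue  # skip swaps that would create self-loops
            if (a, d) in existing or (c, b) in existing:
                continue  # skip swaps that would create duplicate edges
            i, j = edges.index((a, b)), edges.index((c, d))
            existing -= {(a, b), (c, d)}
            edges[i], edges[j] = (a, d), (c, b)
            existing |= {(a, d), (c, b)}
        return edges

    print(switch_randomize([(1, 2), (3, 4), (5, 6), (7, 8)], n_swaps=10))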
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope defined by conditions over one or more attributes; only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs showing their value for solving complex data quality tasks. Further, we define quality measures for conditions inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
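As a rough illustration of precision- and recall-style quality measures for a condition restricting an inclusion dependency, consider the Python sketch below. The exact measures defined in the paper may differ; the predicates, field names and example data here are purely hypothetical.

    # Toy precision/recall-style quality of a condition for a CIND.
    def condition_quality(tuples, condition, satisfies_ind):
        """tuples: list of dicts; condition/satisfies_ind: boolean predicates."""
        selected = [t for t in tuples if condition(t)]
        included = [t for t in tuples if satisfies_ind(t)]
        hits = [t for t in selected if satisfies_ind(t)]
        precision = len(hits) / len(selected) if selected else 0.0
        recall = len(hits) / len(included) if included else 0.0
        return precision, recall

    data = [
        {"type": "loan", "id_in_target": True},
        {"type": "loan", "id_in_target": True},
        {"type": "lead", "id_in_target": False},
    ]
    print(condition_quality(data,
                            condition=lambda t: t["type"] == "loan",
                            satisfies_ind=lambda t: t["id_in_target"]))  # (1.0, 1.0)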
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and information-technology-based coupling. A particularly challenging case are systems where these cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior in between such discrete changes. In addition, we demonstrate that automated analysis techniques known for timed graph transformation systems for inductive invariants can be extended to also cover the hybrid case for an expressive class of hybrid models where the formed tightly coupled subsystems are restricted to smaller local networks.
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
Restrictions on addition
(2012)
Children up to school age have been reported to perform poorly when interpreting sentences containing restrictive and additive focus particles by treating sentences with a focus particle in the same way as sentences without it. Careful comparisons between results of previous studies indicate that this phenomenon is less pronounced for restrictive than for additive particles. We argue that this asymmetry is an effect of the presuppositional status of the proposition triggered by the additive particle. We tested this in two experiments with German-learning three-and four-year-olds using a method that made the exploitation of the information provided by the particles highly relevant for completing the task. Three-year-olds already performed remarkably well with sentences both with auch 'also' and with nur 'only'. Thus, children can consider the presuppositional contribution of the additive particle in their sentence interpretation and can exploit the restrictive particle as a marker of exhaustivity.
Relating to students
(2012)
1. The Assignment 'Devotion to Religion and active Citizenship'
2. The Assignment 'How are religions spread across Europe'
3. The Assignment 'Is football as important as religion?'
4. The Assignment 'Why be religious?'
5. The Assignment 'Lucky charms'
6. The Assignment 'No Creo en el Jamas' (Life after death)
7. The Assignment 'Religion and its influence on politics and policies'
8. The Assignment 'Secularisation in Europe'
9. The Assignment 'The meaning of religious places'
10. The Assignment 'Unity in diversity'
11. Which conceptions did you find?
We introduce a theoretical framework for performing statistical hypothesis testing simultaneously over a fairly general, possibly uncountably infinite, set of null hypotheses. This extends the standard statistical setting for multiple hypotheses testing, which is restricted to a finite set. This work is motivated by numerous modern applications where the observed signal is modeled by a stochastic process over a continuum. As a measure of type I error, we extend the concept of false discovery rate (FDR) to this setting. The FDR is defined as the average ratio of the measure of two random sets, so that its study presents some challenge and is of some intrinsic mathematical interest. Our main result shows how to use the p-value process to control the FDR at a nominal level, either under arbitrary dependence of p-values, or under the assumption that the finite dimensional distributions of the p-value process have positive correlations of a specific type (weak PRDS). Both cases generalize existing results established in the finite setting, the latter one leading to a less conservative procedure. The interest of this approach is demonstrated in several non-parametric examples: testing the mean/signal in a Gaussian white noise model, testing the intensity of a Poisson process and testing the c.d.f. of i.i.d. random variables. Conceptually, an interesting feature of the setting advocated here is that it focuses directly on the intrinsic hypothesis space associated with a testing model on a random process, without referring to an arbitrary discretization.
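In symbols, and glossing over the paper's exact measure-theoretic conventions, the extended FDR described above can be written as

    \mathrm{FDR} \;=\; \mathbb{E}\!\left[ \frac{\Lambda(\mathcal{R} \cap \mathcal{H}_0)}{\Lambda(\mathcal{R})} \right], \qquad \tfrac{0}{0} := 0,

where \mathcal{R} is the (random) set of rejected hypotheses, \mathcal{H}_0 the set of true null hypotheses, and \Lambda a reference measure on the hypothesis space; for a finite hypothesis set equipped with the counting measure this reduces to the usual finite-setting FDR.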
The authors discuss the use of the discrepancy principle for statistical inverse problems, when the underlying operator is of trace class. Under this assumption the discrepancy principle is well defined, however a plain use of it may occasionally fail and it will yield sub-optimal rates. Therefore, a modification of the discrepancy is introduced, which takes into account both of the above deficiencies. For a variety of linear regularization schemes as well as for conjugate gradient iteration this modification is shown to yield order optimal a priori error bounds under general smoothness assumptions. A posteriori error control is also possible, however at a sub-optimal rate, in general. This study uses and complements previous results for bounded deterministic noise.
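For orientation, the classical deterministic form of the discrepancy principle that the above work adapts picks the regularization parameter from the size of the residual; in a common textbook formulation (not the authors' modified statistical version),

    \alpha_* \;=\; \sup\left\{ \alpha > 0 : \ \| A x_\alpha^{\delta} - y^{\delta} \| \le \tau \delta \right\}, \qquad \tau > 1,

where x_\alpha^{\delta} is the regularized solution computed from the noisy data y^{\delta} with noise level \delta.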
Metals are often used in environments that are conducive to corrosion, which leads to a reduction in their mechanical properties and durability. Coatings are applied to corrosion-prone metals such as aluminum alloys to inhibit the destructive surface process of corrosion in a passive or active way. Standard anticorrosive coatings function as a physical barrier between the material and the corrosive environment and provide passive protection only when intact. In contrast, active protection prevents or slows down corrosion even when the main barrier is damaged. The most effective industrially used active corrosion inhibition for aluminum alloys is provided by chromate conversion coatings. However, their toxicity and worldwide restriction provoke an urgent need for finding environmentally friendly corrosion-preventing systems. A promising approach to replace the toxic chromate coatings is to embed particles containing nontoxic inhibitor in a passive coating matrix. This work presents the development and optimization of effective anticorrosive coatings for the industrially important aluminum alloy AA2024-T3 using this approach. The protective coatings were prepared by dispersing mesoporous silica containers, loaded with the nontoxic corrosion inhibitor 2-mercaptobenzothiazole, in a passive sol-gel (SiOx/ZrOx) or organic water-based layer. Two types of porous silica containers with different sizes (d ≈ 80 and 700 nm, respectively) were investigated. The studied robust containers exhibit a high surface area (≈ 1000 m² g⁻¹), narrow pore size distribution (d_pore ≈ 3 nm) and large pore volume (≈ 1 mL g⁻¹) as determined by N₂ sorption measurements. These properties favored the subsequent adsorption and storage of a relatively large amount of inhibitor as well as its release in response to pH changes induced by the corrosion process. The concentration, position and size of the embedded containers were varied to ascertain the optimum conditions for overall anticorrosion performance. Attaining high anticorrosion efficiency was found to require a compromise between delivering an optimal amount of corrosion inhibitor and preserving the coating barrier properties. This study broadens the knowledge about the main factors influencing the coating anticorrosion efficiency and assists the development of optimum active anticorrosive coatings doped with inhibitor-loaded containers.
This thesis investigates the gradient flow of Dirac-harmonic maps. Dirac-harmonic maps are critical points of an energy functional that is motivated from supersymmetric field theories. The critical points of this energy functional couple the equation for harmonic maps with spinor fields. At present, many analytical properties of Dirac-harmonic maps are known, but a general existence result is still missing. In this thesis the existence question is studied using the evolution equations for a regularized version of Dirac-harmonic maps. Since the energy functional for Dirac-harmonic maps is unbounded from below, the method of the gradient flow cannot be applied directly. Thus, we first of all consider a regularization prescription for Dirac-harmonic maps and then study the gradient flow. Chapter 1 gives some background material on harmonic maps/harmonic spinors and summarizes the currently known results about Dirac-harmonic maps. Chapter 2 introduces the notion of Dirac-harmonic maps in detail and presents a regularization prescription for Dirac-harmonic maps. In Chapter 3 the evolution equations for regularized Dirac-harmonic maps are introduced. In addition, the evolution of certain energies is discussed. Moreover, the existence of a short-time solution to the evolution equations is established. Chapter 4 analyzes the evolution equations in the case that the domain manifold is a closed curve. Here, the existence of a smooth long-time solution is proven. Moreover, for the regularization being large enough, it is shown that the evolution equations converge to a regularized Dirac-harmonic map. Finally, it is discussed in which sense the regularization can be removed. In Chapter 5 the evolution equations are studied when the domain manifold is a closed Riemannian spin surface. For the regularization being large enough, the existence of a global weak solution, which is smooth away from finitely many singularities, is proven. It is shown that the evolution equations converge weakly to a regularized Dirac-harmonic map. In addition, it is discussed whether the regularization can be removed in this case.
Since available phosphate (Pi) resources in soil are limited, symbiotic interactions between plant roots and arbuscular mycorrhizal (AM) fungi are a widespread strategy to improve plant phosphate nutrition. The repression of AM symbiosis by a high plant Pi-status indicates a link between Pi homeostasis signalling and AM symbiosis development. This assumption is supported by the systemic induction of several microRNA399 (miR399) primary transcripts in shoots and a simultaneous accumulation of mature miR399 in roots of mycorrhizal plants. However, the physiological role of this miR399 expression pattern is still elusive and raises the question of whether other miRNAs are also involved in AM symbiosis. Therefore, a deep sequencing approach was applied to investigate miRNA-mediated posttranscriptional gene regulation in M. truncatula mycorrhizal roots. Degradome analysis revealed that 185 transcripts were cleaved by miRNAs, of which the majority encoded transcription factors and disease resistance genes, suggesting a tight control of transcriptional reprogramming and a downregulation of defence responses by several miRNAs in mycorrhizal roots. Interestingly, 45 of the miRNA-cleaved transcripts showed a significant differential regulation between mycorrhizal and non-mycorrhizal roots. In addition, key components of the Pi homeostasis signalling pathway were analyzed concerning their expression during AM symbiosis development. MtPhr1 overexpression and time course expression data suggested a strong interrelation between the components of the PHR1-miR399-PHO2 signalling pathway and AM symbiosis, predominantly during later stages of symbiosis. In situ hybridizations confirmed accumulation of mature miR399 in the phloem and in arbuscule-containing cortex cells of mycorrhizal roots. Moreover, a novel target of the miR399 family, named MtPt8, was identified by the above-mentioned degradome analysis. MtPt8 encodes a Pi-transporter exclusively transcribed in mycorrhizal roots, and its promoter activity was restricted to arbuscule-containing cells. At a low Pi-status, MtPt8 transcript abundance inversely correlated with the mature miR399 expression pattern. Increased MtPt8 transcript levels were accompanied by elevated symbiotic Pi-uptake efficiency, indicating its impact on balancing plant and fungal Pi-acquisition. In conclusion, this study provides evidence for a direct link between the regulatory mechanisms of plant Pi-homeostasis and AM symbiosis at a cell-specific level. The results of this study, especially the interaction of miR399 and MtPt8, provide a fundamental step for future studies of plant-microbe interactions with regard to agricultural and ecological aspects.
Cellulose is the most abundant biopolymer on earth and the main load-bearing structure in plant cell walls. Cellulose microfibrils are laid down in a tight parallel array, surrounding plant cells like a corset. The orientation of microfibrils determines the direction of growth by directing turgor pressure to points of expansion (Somerville et al., 2004). Hence, cellulose-deficient mutants usually show cell and organ swelling due to disturbed anisotropic cell expansion (reviewed in Endler and Persson, 2011). How do cellulose microfibrils gain their parallel orientation? First experiments in the 1960s suggested that cortical microtubules aid the cellulose synthases on their way around the cell (Green, 1962; Ledbetter and Porter, 1963). This was proven in 2006 through live cell imaging (Paredez et al., 2006). However, how this guidance was facilitated remained unknown. Through a combinatory approach, including forward and reverse genetics together with advanced co-expression analysis, we identified pom2 as a cellulose-deficient mutant. Map-based cloning revealed that the gene locus of POM2 corresponded to CELLULOSE SYNTHASE INTERACTING 1 (CSI1). Intriguingly, we previously found the CSI1 protein to interact with the putative cytosolic part of the primary cellulose synthases in a yeast-two-hybrid screen (Gu et al., 2010). Exhaustive cell biological analysis of the POM2/CSI1 protein allowed us to determine its cellular function. Using spinning disc confocal microscopy, we could show that in the absence of POM2/CSI1, cellulose synthase complexes lose their microtubule-dependent trajectories in the plasma membrane. The loss of POM2/CSI1, however, does not influence microtubule-dependent delivery of cellulose synthases (Bringmann et al., 2012). Consequently, POM2/CSI1 acts as a bridging protein between active cellulose synthases and cortical microtubules. This thesis summarizes three publications of the author regarding the identification of proteins that connect cellulose synthases to the cytoskeleton. This involves the development of bioinformatics tools allowing candidate gene prediction through co-expression studies (Mutwil et al., 2009), identification of candidate genes through interaction studies (Gu et al., 2010), and determination of the cellular function of the candidate gene (Bringmann et al., 2012).
The EVE curriculum framework
(2012)
In many applications one is faced with the problem of inferring some functional relation between input and output variables from given data. Consider, for instance, the task of email spam filtering where one seeks to find a model which automatically assigns new, previously unseen emails to the class spam or non-spam. Building such a predictive model based on observed training inputs (e.g., emails) with corresponding outputs (e.g., spam labels) is a major goal of machine learning. Many learning methods assume that these training data are governed by the same distribution as the test data which the predictive model will be exposed to at application time. That assumption is violated when the test data are generated in response to the presence of a predictive model. This becomes apparent, for instance, in the above example of email spam filtering. Here, email service providers employ spam filters and spam senders engineer campaign templates so as to achieve a high rate of successful deliveries despite any filters. Most of the existing work casts such situations as learning robust models which are insensitive to small changes of the data generation process. The models are constructed under the worst-case assumption that these changes are performed so as to produce the highest possible adverse effect on the performance of the predictive model. However, this approach is not capable of realistically modeling the true dependency between the model-building process and the process of generating future data. We therefore establish the concept of prediction games: We model the interaction between a learner, who builds the predictive model, and a data generator, who controls the process of data generation, as a one-shot game. The game-theoretic framework enables us to explicitly model the players' interests, their possible actions, their level of knowledge about each other, and the order in which they decide on an action. We model the players' interests as minimizing their own cost functions, both of which depend on both players' actions. The learner's action is to choose the model parameters, and the data generator's action is to perturb the training data, which reflects the modification of the data generation process with respect to the past data. We extensively study three instances of prediction games which differ regarding the order in which the players decide on their actions. We first assume that both players choose their actions simultaneously, that is, without knowledge of their opponent's decision. We identify conditions under which this Nash prediction game has a meaningful solution, that is, a unique Nash equilibrium, and derive algorithms that find the equilibrium prediction model. As a second case, we consider a data generator who is potentially fully informed about the move of the learner. This setting establishes a Stackelberg competition. We derive a relaxed optimization criterion to determine the solution of this game and show that this Stackelberg prediction game generalizes existing prediction models. Finally, we study the setting where the learner observes the data generator's action, that is, the (unlabeled) test data, before building the predictive model. As the test data and the training data may be governed by differing probability distributions, this scenario reduces to learning under covariate shift. We derive a new integrated as well as a two-stage method to account for this data set shift.
In case studies on email spam filtering we empirically explore properties of all derived models as well as several existing baseline methods. We show that spam filters resulting from the Nash prediction game as well as the Stackelberg prediction game in the majority of cases outperform other existing baseline methods.
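Written out schematically (the symbols here are chosen for illustration and are not the thesis' notation), the simultaneous-move case described above asks for a pair (w*, D*) such that

    w^* \in \arg\min_{w} \ \theta_\ell(w, D^*)
    \qquad\text{and}\qquad
    D^* \in \arg\min_{D} \ \theta_d(w^*, D),

where \theta_\ell and \theta_d denote the learner's and the data generator's cost functions, w the model parameters and D the perturbed training data; such a pair is a Nash equilibrium, since neither player can lower its own cost by deviating unilaterally.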
This thesis aims at a better mechanistic understanding of animal communities. To this end, an allometry- and individual-based model has been developed and used to simulate mammal and bird communities in heterogeneous landscapes and to better understand their response to landscape changes (habitat loss and fragmentation).
We introduce renormalized integrals which generalize conventional measure theoretic integrals. One approximates the integration domain by measure spaces and defines the integral as the limit of integrals over the approximating spaces. This concept is implicitly present in many mathematical contexts such as Cauchy's principal value, the determinant of operators on a Hilbert space and the Fourier transform of an L^p function. We use renormalized integrals to define a path integral on manifolds by approximation via geodesic polygons. The main part of the paper is dedicated to the proof of a path integral formula for the heat kernel of any self-adjoint generalized Laplace operator acting on sections of a vector bundle over a compact Riemannian manifold.
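The Cauchy principal value mentioned above is probably the simplest instance of this approximation idea; as a standard textbook example (not taken from the paper),

    \operatorname{p.v.} \int_{-1}^{1} \frac{dx}{x} \;=\; \lim_{\varepsilon \to 0^{+}} \int_{\varepsilon < |x| < 1} \frac{dx}{x} \;=\; 0,

where the integration domain is approximated by the measure spaces \{\varepsilon < |x| < 1\}, on which the integrand is absolutely integrable.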
A linear differential operator L is called weakly hypoelliptic if any local solution u of Lu = 0 is smooth. We allow for systems, i.e. the coefficients may be matrices, not necessarily of square size. This is a huge class of important operators which covers all elliptic, overdetermined elliptic, subelliptic and parabolic equations. We extend several classical theorems from complex analysis to solutions of any weakly hypoelliptic equation: the Montel theorem providing convergent subsequences, the Vitali theorem ensuring convergence of a given sequence, and Riemann's first removable singularity theorem. In the case of constant coefficients we show that Liouville's theorem holds: any bounded solution must be constant and any L^p solution must vanish.
We study boundary value problems for linear elliptic differential operators of order one. The underlying manifold may be noncompact, but the boundary is assumed to be compact. We require a symmetry property of the principal symbol of the operator along the boundary. This is satisfied by Dirac type operators, for instance. We provide a selfcontained introduction to (nonlocal) elliptic boundary conditions, boundary regularity of solutions, and index theory. In particular, we simplify and generalize the traditional theory of elliptic boundary value problems for Dirac type operators. We also prove a related decomposition theorem, a general version of Gromov and Lawson's relative index theorem and a generalization of the cobordism theorem.
This is an introduction to Wiener measure and the Feynman-Kac formula on general Riemannian manifolds for Riemannian geometers with little or no background in stochastics. We explain the construction of Wiener measure based on the heat kernel in full detail and we prove the Feynman-Kac formula for Schrödinger operators with bounded potentials. We also consider normal Riemannian coverings and show that projecting and lifting of paths are inverse operations which respect the Wiener measure.
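In one standard normalization (conventions for the Laplacian and for Brownian motion vary), the Feynman-Kac formula referred to above reads

    \bigl( e^{-t(-\frac{1}{2}\Delta + V)} f \bigr)(x) \;=\; \mathbb{E}_x\!\left[ \exp\!\left( -\int_0^t V(B_s)\, ds \right) f(B_t) \right],

for a bounded potential V and Brownian motion (B_s)_{s \ge 0} started at x; on a Riemannian manifold, \Delta is the Laplace-Beltrami operator and the expectation is taken with respect to the Wiener measure constructed from the heat kernel.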
The reaction of the German labor market to the Great Recession 2008/09 was relatively mild, especially compared to other countries. The reason lies not only in the specific type of the recession, which was favorable given the structure of the German economy, but also in a series of labor market reforms initiated between 2002 and 2005 altering, inter alia, labor supply incentives. However, irrespective of the mild response to the Great Recession, there are a number of substantial future challenges the German labor market will soon have to face. Female labor supply still lies well below that of other countries, and a massive demographic change over the next 50 years will have substantial effects on labor supply as well as the pension system. In addition, due to skill-biased technological change over the next decades, firms will face problems finding employees with adequate skills. The aim of this paper is threefold. First, we outline why the German labor market reacted in such a mild fashion, describe current economic trends of the labor market in light of general trends in the European Union, and reveal some of the main associated challenges. Thereafter, the paper analyzes recent reforms of the main institutional settings of the labor market which influence labor supply. Finally, based on the status quo of these institutional settings, the paper gives a brief overview of strategies to adequately combat the challenges in terms of labor supply and to ensure economic growth in the future.
Tectonic and geological processes on Earth often result in structural anisotropy of the subsurface, which can be imaged by various geophysical methods. In order to achieve appropriate and realistic Earth models for interpretation, inversion algorithms have to allow for an anisotropic subsurface. Within the framework of this thesis, I analyzed a magnetotelluric (MT) data set taken from the Cape Fold Belt in South Africa. This data set exhibited strong indications of crustal anisotropy, e.g. MT phases out of the expected quadrant, which cannot be fitted or interpreted with standard isotropic inversion algorithms. To overcome this obstacle, I have developed a two-dimensional inversion method for reconstructing anisotropic electrical conductivity distributions. The MT inverse problem represents in general a non-linear and ill-posed minimization problem with many degrees of freedom: in the isotropic case, we have to assign an electrical conductivity value to each cell of a large grid representing the Earth's subsurface; e.g., a grid with 100 x 50 cells results in 5000 unknown model parameters. In an anisotropic scenario we have six times as many, because the single value of electrical conductivity becomes a symmetric, real-valued tensor, while the number of data remains unchanged. In order to successfully invert for anisotropic conductivities and to overcome the non-uniqueness of the solution of the inverse problem, it is necessary to use appropriate constraints on the class of allowed models. This becomes even more important as MT data is not equally sensitive to all anisotropic parameters. In this thesis, I have developed an algorithm through which the solution of the anisotropic inversion problem is calculated by minimization of a global penalty functional consisting of three entries: the data misfit, the model roughness constraint and the anisotropy constraint. For comparison, in an isotropic approach only the first two entries are minimized. The newly defined anisotropy term is measured by the sum of the squared differences of the principal conductivity values of the model. The basic idea of this constraint is straightforward: if an isotropic model is already adequate to explain the data, there is no need to introduce electrical anisotropy at all. In order to ensure successful inversion, appropriate trade-off parameters, also known as regularization parameters, have to be chosen for the different model constraints. Synthetic tests show that using fixed trade-off parameters usually causes the inversion to end up with either a smooth model with large RMS error or a rough model with small RMS error. Using a relaxation approach on the regularization parameters after each successful inversion iteration results in smoother inversion models and better convergence. This appears to be a suitable way of selecting the trade-off parameters. In general, the proposed inversion method is adequate for resolving the principal conductivities defined in the horizontal plane. If none of the principal directions of the anisotropic structure coincides with the predefined strike direction, only the corresponding effective conductivities, i.e. the projections of the principal conductivities onto the model coordinate axes, can be resolved, and the information about the rotation angles is lost. Finally, the MT data from the Cape Fold Belt in South Africa were analyzed. The MT data exhibit an area (> 10 km) where MT phases over 90 degrees occur.
This part of the data cannot be modeled with standard isotropic modeling procedures and hence cannot be properly interpreted. The proposed inversion method, however, could not reproduce the anomalously large phases as desired, because the information about the rotation angles is lost. MT phases outside the first quadrant are usually produced by several anisotropic anomalies with oblique anisotropy strikes; to meet this challenge, the algorithm needs further development. However, forward modeling studies with the MT data have shown that a highly conductive heterogeneity at the surface in combination with a mid-crustal electrically anisotropic zone is required to fit the data. According to the known geological and tectonic information, the mid-crustal zone is interpreted as a deep aquifer related to the fractured Table Mountain Group rocks in the Cape Fold Belt.
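The global penalty functional described above can be sketched explicitly. The following is only an illustrative form consistent with the abstract (data misfit, roughness and anisotropy terms with trade-off parameters λ_r and λ_a); the exact norms and weightings used in the thesis may differ.

```latex
% Illustrative structure of the global penalty functional (hedged sketch):
\Phi(\mathbf{m}) =
  \underbrace{\|\mathbf{W}_d\,(\mathbf{d} - F(\mathbf{m}))\|^2}_{\text{data misfit}}
  + \lambda_r \underbrace{\|\partial \mathbf{m}\|^2}_{\text{model roughness}}
  + \lambda_a \underbrace{\sum_{\text{cells}}
      \big[(\sigma_1-\sigma_2)^2 + (\sigma_2-\sigma_3)^2 + (\sigma_1-\sigma_3)^2\big]}_{\text{anisotropy constraint}}
```

Here σ₁, σ₂, σ₃ denote the principal conductivities of a cell, F the forward operator, and λ_r, λ_a the trade-off (regularization) parameters that are relaxed after each successful iteration; for σ₁ = σ₂ = σ₃ the anisotropy term vanishes, so an isotropic model is not penalized.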
This article examines two so-far-understudied verb doubling constructions in Mandarin Chinese, viz., verb doubling clefts and verb doubling lian…dou. We show that these constructions have the same internal syntax as regular clefts and lian…dou sentences, the doubling effect being epiphenomenal; therefore, we classify them as subtypes of the general cleft and lian…dou constructions, respectively, rather than as independent constructions. We also show that, as in many other languages with comparable constructions, the two instances of the verb are part of a single movement chain, which has the peculiarity of allowing Spell-Out of more than one link.
This paper explores questions surrounding corporeality and heavenly ascent, in texts ranging from 1 Enoch to the Hekhalot literature, including Philo’s works. It examines both descriptions of the heavenly realms and accounts of the ascent process. Despite his Platonic apophaticism, Philo superimposes cosmological and spiritual heavens, and draws upon the biblical imagery of dazzling glory. Although they do not express themselves in philosophical language, the heavenly ascent texts make it clear that human beings cannot ascend to heaven in their earthly bodies, and that God cannot be seen with terrestrial eyes. In terms of ideas they are not so far from the philosopher Philo as might at first appear.
One of the major problems for the implementation of water resources planning and management in arid and semi-arid environments is the scarcity of hydrological data and, consequently, of research studies. In this thesis, the hydrology of dryland river systems was analyzed, and a semi-distributed hydrological model and a forecasting approach were developed for flow transmission processes in river systems with a focus on semi-arid conditions. Three different sources of hydrological data (streamflow series, groundwater level series and multi-temporal satellite data) were combined in order to analyze the channel transmission losses of a large reach of the Jaguaribe River in NE Brazil. A perceptual model of this reach was derived, suggesting that models developed for sub-humid and temperate regions may be more suitable for this reach than classical models developed for arid and semi-arid regions. In summary, it was shown that this river reach is hydraulically connected with the groundwater and shifts from being a losing river during the dry season and the beginning of the rainy season to a losing/gaining (mostly losing) river in the middle and at the end of the rainy season. A new semi-distributed channel transmission losses model was developed, designed primarily for simulation in very different dryland environments and with flexible model structures for testing hypotheses on the dominant hydrological processes of rivers. This model was successfully tested on a large reach of the Jaguaribe River in NE Brazil and on a small stream in the Walnut Gulch Experimental Watershed in the SW USA. Hypotheses on the dominant processes of the channel transmission losses (different model structures) in the Jaguaribe River were evaluated, showing that both lateral stream-aquifer water fluxes and groundwater flow in the underlying alluvium parallel to the river course are necessary to predict streamflow and channel transmission losses, the former process being more relevant than the latter. This procedure not only reduced model structure uncertainties, but also revealed modelling failures, leading to the rejection of model structure hypotheses, namely streamflow without river-aquifer interaction and stream-aquifer flow without groundwater flow parallel to the river course. The application of the model to different dryland environments enabled learning about the model itself from differences in channel reach responses. For example, the parameters related to the unsaturated part of the model, which were active for the small reach in the USA, showed a much greater variation in the sensitivity coefficients than those driving the saturated part of the model, which were active for the large reach in Brazil. Moreover, a nonparametric approach that deals with both the deterministic evolution and the inherent fluctuations of river discharge data was developed, based on a qualitative dynamical-system criterion that involves a learning process about the structure of the time series rather than a fitting procedure only. This approach, which is based only on the discharge time series itself, was applied to a headwater catchment in Germany, in which runoff is induced either by convective rainfall during the summer or by snow melt in the spring.
The application showed the following important features:
• the differences between runoff measurements were more suitable than the actual runoff measurements when using regression models;
• the catchment runoff system shifted from being a possible dynamical system contaminated with noise to a linear random process as the sampling interval of the discharge time series increased;
• runoff underestimation can be expected for rising limbs and overestimation for falling limbs.
This nonparametric approach was compared with a distributed hydrological model designed for real-time flood forecasting, with both presenting similar results on average. Finally, a benchmark for hydrological research using semi-distributed modelling was proposed, based on the aforementioned analysis, modelling and forecasting of flow transmission processes. The aim of this benchmark is not to describe a blueprint for hydrological modelling design, but rather to propose a scientific method to improve hydrological knowledge using semi-distributed hydrological modelling. Following the application of the proposed benchmark to a case study, the actual state of its hydrological knowledge and its predictive uncertainty can be determined, primarily through rejected hypotheses on the dominant hydrological processes and differences in catchment/variable responses.
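As a purely illustrative aside (not the thesis model), the kind of lumped reach water balance in which lateral stream-aquifer exchange controls downstream flow can be sketched in a few lines; all names and parameter values below are hypothetical.

```python
# Minimal, illustrative reach water balance for channel transmission losses.
# Not the semi-distributed model developed in the thesis; parameter names
# and values are hypothetical.

def route_reach(q_in, h_stream, h_aquifer, k_exchange=0.05):
    """Route inflow through one reach with a lumped stream-aquifer exchange.

    q_in        inflow to the reach [m3/s]
    h_stream    stream stage [m]
    h_aquifer   adjacent groundwater head [m]
    k_exchange  exchange coefficient [m3/s per m of head difference]
    """
    # Positive flux: water lost from the stream to the aquifer (losing river);
    # negative flux: gaining river. The sign follows the head difference.
    exchange = k_exchange * (h_stream - h_aquifer)
    q_out = max(q_in - exchange, 0.0)
    return q_out, exchange

# Example: a losing reach (stream stage above the adjacent groundwater head)
q_out, loss = route_reach(q_in=12.0, h_stream=3.0, h_aquifer=1.0)
print(f"outflow: {q_out:.2f} m3/s, transmission loss: {loss:.2f} m3/s")
```

Such a toy balance reproduces the qualitative losing/gaining behaviour described above, but omits the unsaturated-zone and alluvium components of the actual model.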
Climate is the principal driving force of hydrological extremes such as floods, and attributing their generating mechanisms is an essential prerequisite for understanding past, present, and future flood variability. Increasing radiative forcing under global warming raises the atmospheric water-holding capacity and is expected to increase the likelihood of strong floods. In addition, natural climate variability affects the frequency and magnitude of these events on annual to millennial time-scales. Particularly in the mid-latitudes of the Northern Hemisphere, correlations between meteorological variables and hydrological indices suggest significant effects of changing climate boundary conditions on floods. To date, however, understanding of flood responses to changing climate boundary conditions is limited by the scarcity of hydrological data in space and time. Exploring paleoclimate archives such as annually laminated (varved) lake sediments allows this gap in knowledge to be filled, as they offer precisely dated time series of flood variability for millennia. During river floods, detrital catchment material is eroded and transported in suspension by fluid turbulence into downstream lakes. In the water body, the transport capacity of the inflowing turbidity current successively diminishes, leading to the deposition of detrital layers on the lake floor. Intercalated into the annual laminations, these detrital layers can be dated down to seasonal resolution. Microfacies analyses and X-ray fluorescence scanning (µ-XRF) at 200 µm resolution were conducted on the varved Mid- to Late Holocene interval of two sediment profiles from pre-alpine Lake Ammersee (southern Germany), located in a proximal (AS10prox) and a distal (AS10dist) position with respect to the main tributary, the River Ammer. To shed light on sediment distribution within the lake, particular emphasis was placed on (1) the detection of intercalated detrital layers and their micro-sedimentological features, and (2) the intra-basin correlation of these deposits. Detrital layers were dated to the season by microscopic varve counting and determination of their microstratigraphic position within a varve. The resulting chronology is verified by accelerator mass spectrometry (AMS) 14C dating of 14 terrestrial plant macrofossils. Over the past ~5500 varve years before present (vyr BP), a total of 1573 detrital layers were detected in one or both of the investigated sediment profiles. Based on their microfacies, geochemistry, and proximal-distal deposition pattern, the detrital layers were interpreted as River Ammer flood deposits. Calibration of the flood layer record against instrumental daily River Ammer runoff data from AD 1926 to 1999 shows that the flood layer succession represents a significant time series of major River Ammer floods in spring and summer, the flood season in the Ammersee region. Flood layer frequency trends are in agreement with decadal variations of the East Atlantic-Western Russia (EA-WR) atmospheric pattern back to 200 yr BP (the end of the atmospheric data used) and with solar activity back to 5500 vyr BP. Enhanced flood frequency corresponds to the negative EA-WR phase and to reduced solar activity. These common links point to a central role of varying large-scale atmospheric circulation over Europe for flood frequency in the Ammersee region and suggest that these atmospheric variations, in turn, are likely modified by solar variability during the past 5500 years.
Furthermore, the flood layer record indicates three shifts in mean layer thickness and frequency, manifested differently in the two sediment profiles, at ~5500, ~2800, and ~500 vyr BP. Combining the information from both sediment profiles enabled these shifts to be interpreted as stepwise increases in mean flood intensity. Likely triggers of these shifts are the gradual reduction of Northern Hemisphere orbital summer forcing and long-term solar activity minima. The hypothesized atmospheric response to this forcing is hemispheric cooling, which enhances equator-to-pole temperature gradients and the potential energy in the troposphere. This energy is transferred into stronger westerly cyclones, more extreme precipitation, and intensified floods at Lake Ammersee. Interpretation of the flood layer frequency and thickness data in combination with reanalysis models and time-series analysis allowed the flood history to be reconstructed and the flood-triggering climate mechanisms in the Ammersee region to be deciphered throughout the past 5500 years. Flood frequency and intensity are not stationary, but are influenced by multi-causal climate forcing of large-scale atmospheric modes on time-scales from years to millennia. These results challenge future projections that propose an increase in floods as the Earth warms based only on the assumption of an enhanced hydrological cycle.
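To illustrate one step of such a calibration in generic terms, annual counts of runoff peaks above a threshold can be compared with annual flood layer counts; the threshold choice and event definition below are hypothetical and not the procedure applied in the thesis.

```python
# Hedged sketch: count runoff peaks above a threshold per calendar year,
# which could then be compared with annual flood layer counts.
# Synthetic data only; not the actual River Ammer record or calibration.
import numpy as np

def annual_flood_counts(daily_runoff, years, threshold):
    """Count independent runoff peaks above `threshold` per calendar year."""
    counts = {}
    for year in np.unique(years):
        q = daily_runoff[years == year]
        above = q > threshold
        # an event starts at each upward crossing of the threshold
        events = int(above[0]) + int(np.sum(above[1:] & ~above[:-1]))
        counts[int(year)] = events
    return counts

# toy example: three years of synthetic daily runoff
rng = np.random.default_rng(0)
years = np.repeat([1926, 1927, 1928], 365)
runoff = rng.gamma(shape=2.0, scale=5.0, size=years.size)
print(annual_flood_counts(runoff, years, threshold=np.percentile(runoff, 99)))
```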
This study provides a detailed analysis of the mid-Holocene to present-day precipitation change in the Asian monsoon region. We compare for the first time results of high-resolution climate model simulations with a standardised set of mid-Holocene moisture reconstructions. Changes in the simulated summer monsoon characteristics (onset, withdrawal, length and associated rainfall) and the mechanisms causing the Holocene precipitation changes are investigated. According to the model, most parts of the Indian subcontinent received more precipitation (up to 5 mm/day) in the mid-Holocene than at present. This is related to a stronger Indian summer monsoon accompanied by an intensified vertically integrated moisture flux convergence. The East Asian monsoon region exhibits local inhomogeneities in the simulated annual precipitation signal. The sign of this signal depends on the balance between decreased pre-monsoon and increased monsoon precipitation in the mid-Holocene compared to the present day. Hence, rainfall changes in the East Asian monsoon domain are not solely associated with modifications of the summer monsoon circulation but also depend on changes in the mid-latitude westerly wind system that dominates the circulation during the pre-monsoon season. The proxy-based climate reconstructions confirm the regional dissimilarities in the annual precipitation signal and agree well with the model results. Our results highlight the importance of including the pre-monsoon season in climate studies of the Asian monsoon system and point out the complex response of this system to the Holocene insolation forcing. The comparison with a coarse climate model simulation reveals that this complex response can only be resolved in high-resolution simulations.
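For reference, the vertically integrated moisture flux convergence mentioned above is commonly defined as follows; the simulation may use an equivalent formulation.

```latex
% Common definition of the vertically integrated moisture flux convergence:
\mathrm{VIMFC} = -\,\nabla \cdot \frac{1}{g}\int_{p_{\mathrm{top}}}^{p_{\mathrm{s}}} q\,\mathbf{v}\, dp
```

where q is the specific humidity, \mathbf{v} the horizontal wind vector, p_s the surface pressure and g the gravitational acceleration; positive values indicate net moisture convergence that can feed enhanced monsoon rainfall.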
Theory of mRNA degradation
(2012)
One of the central themes of biology is to understand how individual cells achieve high fidelity in gene expression. Each cell needs to ensure accurate protein levels for its proper functioning and its capability to proliferate. Therefore, complex regulatory mechanisms have evolved that render the expression of each gene dependent on the expression level of (all) other genes. Regulation can occur at different stages within the framework of the central dogma of molecular biology. One very effective and relatively direct mechanism is the regulation of the stability of mRNAs, and all organisms have evolved diverse and powerful mechanisms to achieve this. In order to better comprehend the regulation in living cells, biochemists have studied specific degradation mechanisms in detail. In addition, modern high-throughput techniques allow quantitative data to be obtained on a global scale by parallel analysis of the decay patterns of many different mRNAs from different genes. In previous studies, the interpretation of these mRNA decay experiments relied on a simple theoretical description based on exponential decay. However, this does not account for the complexity of the responsible mechanisms and, as a consequence, the exponential decay is often not in agreement with the experimental decay patterns. We have developed an improved and more general theory of mRNA degradation, which provides a general framework of mRNA expression and allows specific degradation mechanisms to be described. We have attempted to provide detailed models for the regulation in different organisms. In the yeast S. cerevisiae, different degradation pathways are known to compete, and most of them rely on the biochemical modification of mRNA molecules. In bacteria such as E. coli, degradation proceeds primarily endonucleolytically, i.e. it is governed by an initial cleavage within the coding region; in addition, it is often coupled to the level of maturity and the polysome size of an mRNA. For both S. cerevisiae and E. coli, our descriptions lead to a considerable improvement in the interpretation of experimental data. The general outcome is that the degradation of mRNA must be described by an age-dependent degradation rate, which can be interpreted as a consequence of molecular aging of mRNAs. Within our theory, we find adequate ways to address this much-debated topic from a theoretical perspective. The improved understanding of mRNA degradation can readily be applied to further comprehend mRNA expression under different internal or environmental conditions, such as after the induction of transcription or the application of stress. The role of mRNA decay can also be assessed in the context of translation and protein synthesis. The ultimate goal in understanding gene regulation mediated by mRNA stability will be to identify the relevance and biological function of the different mechanisms. Once more quantitative data become available, our description allows the role of each mechanism to be elaborated by devising a suitable model.
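The contrast between the simple exponential description and an age-dependent degradation rate can be stated generically (this is standard survival-process notation, not the specific pathway models derived in the thesis):

```latex
% Survival probability of an mRNA of age t with age-dependent rate omega(a):
P_{\mathrm{survive}}(t) = \exp\!\Big(-\int_0^{t}\omega(a)\,\mathrm{d}a\Big)
```

For a constant rate ω(a) = ω₀ this reduces to the familiar exponential decay exp(−ω₀ t), whereas an age-dependent ω(a), reflecting molecular aging of the transcript, produces the non-exponential decay patterns observed in high-throughput experiments.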
A discrete analogue of the Witten Laplacian on the n-dimensional integer lattice is considered. After rescaling of the operator and the lattice size, we analyze the tunnel effect between different wells, providing sharp asymptotics of the low-lying spectrum. Our proof, inspired by work of B. Helffer, M. Klein and F. Nier in the continuous setting, is based on the construction of a discrete Witten complex and a semiclassical analysis of the corresponding discrete Witten Laplacian on 1-forms. The result can be reformulated in terms of metastable Markov processes on the lattice.
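For orientation, the continuous semiclassical Witten Laplacian on 0-forms that the discrete construction parallels is the standard operator (background only, not a result of the paper):

```latex
% Continuous Witten Laplacian on 0-forms (standard background):
\Delta^{(0)}_{f,h} = d_{f,h}^{*} d_{f,h} = -h^{2}\Delta + |\nabla f|^{2} - h\,\Delta f,
\qquad d_{f,h} = e^{-f/h}\,(h\,d)\,e^{f/h}
```

Its low-lying eigenvalues are exponentially small in h and are governed by tunneling between the wells (local minima) of f, which is the effect analyzed here in the discrete lattice setting.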
Much of our knowledge about the solar dynamo is based on sunspot observations. It is thus desirable to extend the set of positional and morphological data of sunspots into the past. Gustav Spörer observed the Sun in Germany, first from Anklam (1861–1873) and later from Potsdam (1874–1894). He left detailed prints of sunspot groups, which we digitized and processed to mitigate artifacts left in the prints by the passage of time. After careful geometrical correction, the sunspot data are now available as synoptic charts for almost 450 solar rotation periods. Individual sunspot positions can thus be precisely determined, and spot areas can be accurately measured using morphological image processing techniques. These methods also allow us to determine tilt angles of active regions (Joy’s law) and to assess the complexity of an active region.
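As a rough illustration of area measurement by morphological image processing (not the actual pipeline used for the Spörer drawings, which additionally involves artifact removal and geometric correction), dark spots can be segmented and measured as follows; the threshold and cleanup choices are hypothetical.

```python
# Hedged sketch: measure sunspot areas on a digitized drawing with basic
# morphological image processing. Not the processing chain used for the
# Spoerer prints; threshold and cleanup choices are hypothetical.
import numpy as np
from scipy import ndimage

def spot_areas(image, threshold):
    """Label dark sunspot pixels and return the area of each spot in pixels."""
    spots = image < threshold                 # dark features on a bright disk
    spots = ndimage.binary_opening(spots)     # remove isolated noise pixels
    labels, n = ndimage.label(spots)          # connected-component labelling
    return ndimage.sum(spots, labels, index=range(1, n + 1))

# toy example: a synthetic bright 'disk' with two dark spots
img = np.full((100, 100), 200.0)
img[40:45, 30:36] = 20.0
img[60:63, 70:74] = 20.0
print(spot_areas(img, threshold=100))
```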